2025-06-01 22:04:57.189137 | Job console starting
2025-06-01 22:04:57.206181 | Updating git repos
2025-06-01 22:04:57.298776 | Cloning repos into workspace
2025-06-01 22:04:57.483398 | Restoring repo states
2025-06-01 22:04:57.507118 | Merging changes
2025-06-01 22:04:57.507143 | Checking out repos
2025-06-01 22:04:57.748764 | Preparing playbooks
2025-06-01 22:04:58.320698 | Running Ansible setup
2025-06-01 22:05:02.739505 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-06-01 22:05:03.544771 |
2025-06-01 22:05:03.544967 | PLAY [Base pre]
2025-06-01 22:05:03.562516 |
2025-06-01 22:05:03.562672 | TASK [Setup log path fact]
2025-06-01 22:05:03.593402 | orchestrator | ok
2025-06-01 22:05:03.611204 |
2025-06-01 22:05:03.611355 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-01 22:05:03.653237 | orchestrator | ok
2025-06-01 22:05:03.665626 |
2025-06-01 22:05:03.665754 | TASK [emit-job-header : Print job information]
2025-06-01 22:05:03.725308 | # Job Information
2025-06-01 22:05:03.725595 | Ansible Version: 2.16.14
2025-06-01 22:05:03.725656 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-06-01 22:05:03.725716 | Pipeline: post
2025-06-01 22:05:03.725758 | Executor: 521e9411259a
2025-06-01 22:05:03.725830 | Triggered by: https://github.com/osism/testbed/commit/d6099138e48b52987bb07a725af029effb071be4
2025-06-01 22:05:03.725871 | Event ID: 94ba7f50-3f23-11f0-9e21-56db86a6d01c
2025-06-01 22:05:03.735588 |
2025-06-01 22:05:03.735730 | LOOP [emit-job-header : Print node information]
2025-06-01 22:05:03.876221 | orchestrator | ok:
2025-06-01 22:05:03.876528 | orchestrator | # Node Information
2025-06-01 22:05:03.876587 | orchestrator | Inventory Hostname: orchestrator
2025-06-01 22:05:03.876630 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-06-01 22:05:03.876670 | orchestrator | Username: zuul-testbed04
2025-06-01 22:05:03.876707 | orchestrator | Distro: Debian 12.11
2025-06-01 22:05:03.876749 | orchestrator | Provider: static-testbed
2025-06-01 22:05:03.876806 | orchestrator | Region:
2025-06-01 22:05:03.876847 | orchestrator | Label: testbed-orchestrator
2025-06-01 22:05:03.876884 | orchestrator | Product Name: OpenStack Nova
2025-06-01 22:05:03.876918 | orchestrator | Interface IP: 81.163.193.140
2025-06-01 22:05:03.900745 |
2025-06-01 22:05:03.900942 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-06-01 22:05:04.434644 | orchestrator -> localhost | changed
2025-06-01 22:05:04.445664 |
2025-06-01 22:05:04.445853 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-06-01 22:05:05.568819 | orchestrator -> localhost | changed
2025-06-01 22:05:05.594136 |
2025-06-01 22:05:05.594305 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-06-01 22:05:05.902489 | orchestrator -> localhost | ok
2025-06-01 22:05:05.910214 |
2025-06-01 22:05:05.910373 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-06-01 22:05:05.940937 | orchestrator | ok
2025-06-01 22:05:05.958368 | orchestrator | included: /var/lib/zuul/builds/2b9a5e4242674de7b2ac603f8dfc33e2/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-06-01 22:05:05.966925 |
2025-06-01 22:05:05.967050 | TASK [add-build-sshkey : Create Temp SSH key]
2025-06-01 22:05:07.873102 | orchestrator -> localhost | Generating public/private rsa key pair.
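The "Create Temp SSH key" task here generates a fresh passphrase-less 3072-bit RSA keypair named after the build UUID. A rough stand-alone sketch of that step (the temp directory is a stand-in for the Zuul work directory; the UUID and comment are the ones shown in this log):

```shell
# Sketch of the per-build key generation, as assumed from the log output.
workdir="$(mktemp -d)"
build_uuid="2b9a5e4242674de7b2ac603f8dfc33e2"

# No passphrase (-N ""); the comment matches the log's "zuul-build-sshkey".
ssh-keygen -q -t rsa -b 3072 -N "" -C "zuul-build-sshkey" \
  -f "${workdir}/${build_uuid}_id_rsa"

# Two files result: the private key and its .pub counterpart.
ls -l "${workdir}/${build_uuid}_id_rsa" "${workdir}/${build_uuid}_id_rsa.pub"
```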
2025-06-01 22:05:07.873667 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/2b9a5e4242674de7b2ac603f8dfc33e2/work/2b9a5e4242674de7b2ac603f8dfc33e2_id_rsa
2025-06-01 22:05:07.873828 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/2b9a5e4242674de7b2ac603f8dfc33e2/work/2b9a5e4242674de7b2ac603f8dfc33e2_id_rsa.pub
2025-06-01 22:05:07.873915 | orchestrator -> localhost | The key fingerprint is:
2025-06-01 22:05:07.874008 | orchestrator -> localhost | SHA256:Tlz5JbelM+EJJy45t63Ui/54UOFhRzofdsxHsQm8nR8 zuul-build-sshkey
2025-06-01 22:05:07.874112 | orchestrator -> localhost | The key's randomart image is:
2025-06-01 22:05:07.874207 | orchestrator -> localhost | +---[RSA 3072]----+
2025-06-01 22:05:07.874272 | orchestrator -> localhost | |             .. o+|
2025-06-01 22:05:07.874336 | orchestrator -> localhost | |            . .==+|
2025-06-01 22:05:07.874395 | orchestrator -> localhost | |           o +oXOB|
2025-06-01 22:05:07.874452 | orchestrator -> localhost | |        . . + X+E+|
2025-06-01 22:05:07.874508 | orchestrator -> localhost | |        S + +.B +|
2025-06-01 22:05:07.874578 | orchestrator -> localhost | |         o +.+ o.|
2025-06-01 22:05:07.874636 | orchestrator -> localhost | |          . o.o |
2025-06-01 22:05:07.874692 | orchestrator -> localhost | |           . +.. |
2025-06-01 22:05:07.874751 | orchestrator -> localhost | |            .=oo |
2025-06-01 22:05:07.874878 | orchestrator -> localhost | +----[SHA256]-----+
2025-06-01 22:05:07.875096 | orchestrator -> localhost | ok: Runtime: 0:00:01.332519
2025-06-01 22:05:07.890252 |
2025-06-01 22:05:07.890427 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-06-01 22:05:07.939007 | orchestrator | ok
2025-06-01 22:05:07.953278 | orchestrator | included: /var/lib/zuul/builds/2b9a5e4242674de7b2ac603f8dfc33e2/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-06-01 22:05:07.962954 |
2025-06-01 22:05:07.963066 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-06-01 22:05:07.988493 | orchestrator | skipping: Conditional result was False
2025-06-01 22:05:08.004953 |
2025-06-01 22:05:08.005117 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-06-01 22:05:08.606162 | orchestrator | changed
2025-06-01 22:05:08.614476 |
2025-06-01 22:05:08.614636 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-06-01 22:05:08.912532 | orchestrator | ok
2025-06-01 22:05:08.925746 |
2025-06-01 22:05:08.926002 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-06-01 22:05:09.354242 | orchestrator | ok
2025-06-01 22:05:09.365321 |
2025-06-01 22:05:09.365478 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-06-01 22:05:09.809740 | orchestrator | ok
2025-06-01 22:05:09.818285 |
2025-06-01 22:05:09.818421 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-06-01 22:05:09.843571 | orchestrator | skipping: Conditional result was False
2025-06-01 22:05:09.857376 |
2025-06-01 22:05:09.857524 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-06-01 22:05:10.338341 | orchestrator -> localhost | changed
2025-06-01 22:05:10.361930 |
2025-06-01 22:05:10.362072 | TASK [add-build-sshkey : Add back temp key]
2025-06-01 22:05:10.752428 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/2b9a5e4242674de7b2ac603f8dfc33e2/work/2b9a5e4242674de7b2ac603f8dfc33e2_id_rsa (zuul-build-sshkey)
2025-06-01 22:05:10.753088 | orchestrator -> localhost | ok: Runtime: 0:00:00.017791
2025-06-01 22:05:10.771631 |
2025-06-01 22:05:10.771845 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-06-01 22:05:11.216129 | orchestrator | ok
2025-06-01 22:05:11.226829 |
2025-06-01 22:05:11.226997 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-06-01 22:05:11.254077 | orchestrator | skipping: Conditional result was False
2025-06-01 22:05:11.308266 |
2025-06-01 22:05:11.308419 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-06-01 22:05:11.735190 | orchestrator | ok
2025-06-01 22:05:11.753209 |
2025-06-01 22:05:11.753408 | TASK [validate-host : Define zuul_info_dir fact]
2025-06-01 22:05:11.801454 | orchestrator | ok
2025-06-01 22:05:11.811810 |
2025-06-01 22:05:11.811963 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-06-01 22:05:12.131068 | orchestrator -> localhost | ok
2025-06-01 22:05:12.147160 |
2025-06-01 22:05:12.147347 | TASK [validate-host : Collect information about the host]
2025-06-01 22:05:13.363668 | orchestrator | ok
2025-06-01 22:05:13.377641 |
2025-06-01 22:05:13.377795 | TASK [validate-host : Sanitize hostname]
2025-06-01 22:05:13.447723 | orchestrator | ok
2025-06-01 22:05:13.453663 |
2025-06-01 22:05:13.453848 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-06-01 22:05:14.109604 | orchestrator -> localhost | changed
2025-06-01 22:05:14.123234 |
2025-06-01 22:05:14.123419 | TASK [validate-host : Collect information about zuul worker]
2025-06-01 22:05:14.560689 | orchestrator | ok
2025-06-01 22:05:14.569744 |
2025-06-01 22:05:14.569924 | TASK [validate-host : Write out all zuul information for each host]
2025-06-01 22:05:15.167453 | orchestrator -> localhost | changed
2025-06-01 22:05:15.184855 |
2025-06-01 22:05:15.185027 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-06-01 22:05:15.485746 | orchestrator | ok
2025-06-01 22:05:15.494795 |
2025-06-01 22:05:15.494999 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-06-01 22:05:48.072388 | orchestrator | changed:
2025-06-01 22:05:48.072629 | orchestrator | .d..t...... src/
2025-06-01 22:05:48.072664 | orchestrator | .d..t...... src/github.com/
2025-06-01 22:05:48.072690 | orchestrator | .d..t...... src/github.com/osism/
2025-06-01 22:05:48.072713 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-06-01 22:05:48.072757 | orchestrator | RedHat.yml
2025-06-01 22:05:48.083317 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-06-01 22:05:48.083335 | orchestrator | RedHat.yml
2025-06-01 22:05:48.083387 | orchestrator | = 1.53.0"...
2025-06-01 22:06:00.245173 | orchestrator | 22:06:00.244 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-06-01 22:06:01.250810 | orchestrator | 22:06:01.249 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-06-01 22:06:02.268738 | orchestrator | 22:06:02.268 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-06-01 22:06:03.505662 | orchestrator | 22:06:03.505 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0...
2025-06-01 22:06:04.534040 | orchestrator | 22:06:04.533 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2)
2025-06-01 22:06:05.501343 | orchestrator | 22:06:05.501 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-06-01 22:06:06.484218 | orchestrator | 22:06:06.484 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-06-01 22:06:06.484286 | orchestrator | 22:06:06.484 STDOUT terraform: Providers are signed by their developers.
2025-06-01 22:06:06.484292 | orchestrator | 22:06:06.484 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-06-01 22:06:06.484309 | orchestrator | 22:06:06.484 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-06-01 22:06:06.484364 | orchestrator | 22:06:06.484 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-06-01 22:06:06.484428 | orchestrator | 22:06:06.484 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-06-01 22:06:06.484469 | orchestrator | 22:06:06.484 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-06-01 22:06:06.484493 | orchestrator | 22:06:06.484 STDOUT terraform: you run "tofu init" in the future.
2025-06-01 22:06:06.484736 | orchestrator | 22:06:06.484 STDOUT terraform: OpenTofu has been successfully initialized!
2025-06-01 22:06:06.484775 | orchestrator | 22:06:06.484 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-06-01 22:06:06.484800 | orchestrator | 22:06:06.484 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-06-01 22:06:06.484807 | orchestrator | 22:06:06.484 STDOUT terraform: should now work.
2025-06-01 22:06:06.484856 | orchestrator | 22:06:06.484 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-06-01 22:06:06.484910 | orchestrator | 22:06:06.484 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-06-01 22:06:06.484956 | orchestrator | 22:06:06.484 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-06-01 22:06:07.287569 | orchestrator | 22:06:07.287 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-06-01 22:06:07.539931 | orchestrator | 22:06:07.539 STDOUT terraform: Created and switched to workspace "ci"!
2025-06-01 22:06:07.539992 | orchestrator | 22:06:07.539 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-06-01 22:06:07.539999 | orchestrator | 22:06:07.539 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-06-01 22:06:07.540004 | orchestrator | 22:06:07.539 STDOUT terraform: for this configuration.
2025-06-01 22:06:07.756042 | orchestrator | 22:06:07.755 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-06-01 22:06:07.875877 | orchestrator | 22:06:07.875 STDOUT terraform: ci.auto.tfvars
2025-06-01 22:06:08.076355 | orchestrator | 22:06:08.076 STDOUT terraform: default_custom.tf
2025-06-01 22:06:08.783293 | orchestrator | 22:06:08.783 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-06-01 22:06:09.722728 | orchestrator | 22:06:09.721 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
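The repeated Terragrunt WARN lines above point at a rename of the environment variable that selects the tofu binary. A minimal sketch of the fix the warning itself suggests (the path is the one printed in this log and is environment-specific):

```shell
# Replace the deprecated TERRAGRUNT_TFPATH with TG_TF_PATH, as the
# Terragrunt deprecation warning in the log recommends.
unset TERRAGRUNT_TFPATH
export TG_TF_PATH="/home/zuul-testbed04/terraform"
echo "tofu binary: ${TG_TF_PATH}"
```

Exporting the new name (and unsetting the old one) in the job environment should silence the three WARN lines without changing behavior.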
2025-06-01 22:06:10.259418 | orchestrator | 22:06:10.258 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-06-01 22:06:10.459975 | orchestrator | 22:06:10.459 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-06-01 22:06:10.460116 | orchestrator | 22:06:10.459 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-06-01 22:06:10.460128 | orchestrator | 22:06:10.460 STDOUT terraform:   + create
2025-06-01 22:06:10.460138 | orchestrator | 22:06:10.460 STDOUT terraform:  <= read (data resources)
2025-06-01 22:06:10.460188 | orchestrator | 22:06:10.460 STDOUT terraform: OpenTofu will perform the following actions:
2025-06-01 22:06:10.460371 | orchestrator | 22:06:10.460 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-06-01 22:06:10.460452 | orchestrator | 22:06:10.460 STDOUT terraform:   # (config refers to values not yet known)
2025-06-01 22:06:10.460535 | orchestrator | 22:06:10.460 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-06-01 22:06:10.460618 | orchestrator | 22:06:10.460 STDOUT terraform:       + checksum = (known after apply)
2025-06-01 22:06:10.460696 | orchestrator | 22:06:10.460 STDOUT terraform:       + created_at = (known after apply)
2025-06-01 22:06:10.460776 | orchestrator | 22:06:10.460 STDOUT terraform:       + file = (known after apply)
2025-06-01 22:06:10.460859 | orchestrator | 22:06:10.460 STDOUT terraform:       + id = (known after apply)
2025-06-01 22:06:10.460938 | orchestrator | 22:06:10.460 STDOUT terraform:       + metadata = (known after apply)
2025-06-01 22:06:10.461016 | orchestrator | 22:06:10.460 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-06-01 22:06:10.461114 | orchestrator | 22:06:10.461 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-06-01 22:06:10.461167 | orchestrator | 22:06:10.461 STDOUT terraform:       + most_recent = true
2025-06-01 22:06:10.461246 | orchestrator | 22:06:10.461 STDOUT terraform:       + name = (known after apply)
2025-06-01 22:06:10.461324 | orchestrator | 22:06:10.461 STDOUT terraform:       + protected = (known after apply)
2025-06-01 22:06:10.461403 | orchestrator | 22:06:10.461 STDOUT terraform:       + region = (known after apply)
2025-06-01 22:06:10.461482 | orchestrator | 22:06:10.461 STDOUT terraform:       + schema = (known after apply)
2025-06-01 22:06:10.461562 | orchestrator | 22:06:10.461 STDOUT terraform:       + size_bytes = (known after apply)
2025-06-01 22:06:10.461641 | orchestrator | 22:06:10.461 STDOUT terraform:       + tags = (known after apply)
2025-06-01 22:06:10.461724 | orchestrator | 22:06:10.461 STDOUT terraform:       + updated_at = (known after apply)
2025-06-01 22:06:10.461766 | orchestrator | 22:06:10.461 STDOUT terraform:     }
2025-06-01 22:06:10.461896 | orchestrator | 22:06:10.461 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-06-01 22:06:10.461976 | orchestrator | 22:06:10.461 STDOUT terraform:   # (config refers to values not yet known)
2025-06-01 22:06:10.462133 | orchestrator | 22:06:10.461 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-06-01 22:06:10.462212 | orchestrator | 22:06:10.462 STDOUT terraform:       + checksum = (known after apply)
2025-06-01 22:06:10.462292 | orchestrator | 22:06:10.462 STDOUT terraform:       + created_at = (known after apply)
2025-06-01 22:06:10.462374 | orchestrator | 22:06:10.462 STDOUT terraform:       + file = (known after apply)
2025-06-01 22:06:10.462454 | orchestrator | 22:06:10.462 STDOUT terraform:       + id = (known after apply)
2025-06-01 22:06:10.462571 | orchestrator | 22:06:10.462 STDOUT terraform:       + metadata = (known after apply)
2025-06-01 22:06:10.462649 | orchestrator | 22:06:10.462 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-06-01 22:06:10.462728 | orchestrator | 22:06:10.462 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-06-01 22:06:10.462781 | orchestrator | 22:06:10.462 STDOUT terraform:       + most_recent = true
2025-06-01 22:06:10.462860 | orchestrator | 22:06:10.462 STDOUT terraform:       + name = (known after apply)
2025-06-01 22:06:10.462940 | orchestrator | 22:06:10.462 STDOUT terraform:       + protected = (known after apply)
2025-06-01 22:06:10.463021 | orchestrator | 22:06:10.462 STDOUT terraform:       + region = (known after apply)
2025-06-01 22:06:10.463116 | orchestrator | 22:06:10.463 STDOUT terraform:       + schema = (known after apply)
2025-06-01 22:06:10.463204 | orchestrator | 22:06:10.463 STDOUT terraform:       + size_bytes = (known after apply)
2025-06-01 22:06:10.463288 | orchestrator | 22:06:10.463 STDOUT terraform:       + tags = (known after apply)
2025-06-01 22:06:10.463384 | orchestrator | 22:06:10.463 STDOUT terraform:       + updated_at = (known after apply)
2025-06-01 22:06:10.463429 | orchestrator | 22:06:10.463 STDOUT terraform:     }
2025-06-01 22:06:10.463513 | orchestrator | 22:06:10.463 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-06-01 22:06:10.463599 | orchestrator | 22:06:10.463 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-06-01 22:06:10.463707 | orchestrator | 22:06:10.463 STDOUT terraform:       + content = (known after apply)
2025-06-01 22:06:10.463804 | orchestrator | 22:06:10.463 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-01 22:06:10.463902 | orchestrator | 22:06:10.463 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-01 22:06:10.463999 | orchestrator | 22:06:10.463 STDOUT terraform:       + content_md5 = (known after apply)
2025-06-01 22:06:10.464121 | orchestrator | 22:06:10.464 STDOUT terraform:       + content_sha1 = (known after apply)
2025-06-01 22:06:10.464219 | orchestrator | 22:06:10.464 STDOUT terraform:       + content_sha256 = (known after apply)
2025-06-01 22:06:10.464314 | orchestrator | 22:06:10.464 STDOUT terraform:       + content_sha512 = (known after apply)
2025-06-01 22:06:10.464382 | orchestrator | 22:06:10.464 STDOUT terraform:       + directory_permission = "0777"
2025-06-01 22:06:10.464451 | orchestrator | 22:06:10.464 STDOUT terraform:       + file_permission = "0644"
2025-06-01 22:06:10.464557 | orchestrator | 22:06:10.464 STDOUT terraform:       + filename = ".MANAGER_ADDRESS.ci"
2025-06-01 22:06:10.464663 | orchestrator | 22:06:10.464 STDOUT terraform:       + id = (known after apply)
2025-06-01 22:06:10.464697 | orchestrator | 22:06:10.464 STDOUT terraform:     }
2025-06-01 22:06:10.464773 | orchestrator | 22:06:10.464 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-06-01 22:06:10.464843 | orchestrator | 22:06:10.464 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-06-01 22:06:10.464944 | orchestrator | 22:06:10.464 STDOUT terraform:       + content = (known after apply)
2025-06-01 22:06:10.465040 | orchestrator | 22:06:10.464 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-01 22:06:10.465188 | orchestrator | 22:06:10.465 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-01 22:06:10.465284 | orchestrator | 22:06:10.465 STDOUT terraform:       + content_md5 = (known after apply)
2025-06-01 22:06:10.465380 | orchestrator | 22:06:10.465 STDOUT terraform:       + content_sha1 = (known after apply)
2025-06-01 22:06:10.465478 | orchestrator | 22:06:10.465 STDOUT terraform:       + content_sha256 = (known after apply)
2025-06-01 22:06:10.465586 | orchestrator | 22:06:10.465 STDOUT terraform:       + content_sha512 = (known after apply)
2025-06-01 22:06:10.465633 | orchestrator | 22:06:10.465 STDOUT terraform:       + directory_permission = "0777"
2025-06-01 22:06:10.465693 | orchestrator | 22:06:10.465 STDOUT terraform:       + file_permission = "0644"
2025-06-01 22:06:10.465770 | orchestrator | 22:06:10.465 STDOUT terraform:       + filename = ".id_rsa.ci.pub"
2025-06-01 22:06:10.465859 | orchestrator | 22:06:10.465 STDOUT terraform:       + id = (known after apply)
2025-06-01 22:06:10.465895 | orchestrator | 22:06:10.465 STDOUT terraform:     }
2025-06-01 22:06:10.465954 | orchestrator | 22:06:10.465 STDOUT terraform:   # local_file.inventory will be created
2025-06-01 22:06:10.466031 | orchestrator | 22:06:10.465 STDOUT terraform:   + resource "local_file" "inventory" {
2025-06-01 22:06:10.466163 | orchestrator | 22:06:10.466 STDOUT terraform:       + content = (known after apply)
2025-06-01 22:06:10.466242 | orchestrator | 22:06:10.466 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-01 22:06:10.466314 | orchestrator | 22:06:10.466 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-01 22:06:10.466385 | orchestrator | 22:06:10.466 STDOUT terraform:       + content_md5 = (known after apply)
2025-06-01 22:06:10.466461 | orchestrator | 22:06:10.466 STDOUT terraform:       + content_sha1 = (known after apply)
2025-06-01 22:06:10.466529 | orchestrator | 22:06:10.466 STDOUT terraform:       + content_sha256 = (known after apply)
2025-06-01 22:06:10.466602 | orchestrator | 22:06:10.466 STDOUT terraform:       + content_sha512 = (known after apply)
2025-06-01 22:06:10.466652 | orchestrator | 22:06:10.466 STDOUT terraform:       + directory_permission = "0777"
2025-06-01 22:06:10.466699 | orchestrator | 22:06:10.466 STDOUT terraform:       + file_permission = "0644"
2025-06-01 22:06:10.466761 | orchestrator | 22:06:10.466 STDOUT terraform:       + filename = "inventory.ci"
2025-06-01 22:06:10.466875 | orchestrator | 22:06:10.466 STDOUT terraform:       + id = (known after apply)
2025-06-01 22:06:10.466885 | orchestrator | 22:06:10.466 STDOUT terraform:     }
2025-06-01 22:06:10.466920 | orchestrator | 22:06:10.466 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-06-01 22:06:10.466974 | orchestrator | 22:06:10.466 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-06-01 22:06:10.467039 | orchestrator | 22:06:10.466 STDOUT terraform:       + content = (sensitive value)
2025-06-01 22:06:10.467125 | orchestrator | 22:06:10.467 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-01 22:06:10.467195 | orchestrator | 22:06:10.467 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-01 22:06:10.467266 | orchestrator | 22:06:10.467 STDOUT terraform:       + content_md5 = (known after apply)
2025-06-01 22:06:10.467338 | orchestrator | 22:06:10.467 STDOUT terraform:       + content_sha1 = (known after apply)
2025-06-01 22:06:10.467415 | orchestrator | 22:06:10.467 STDOUT terraform:       + content_sha256 = (known after apply)
2025-06-01 22:06:10.467487 | orchestrator | 22:06:10.467 STDOUT terraform:       + content_sha512 = (known after apply)
2025-06-01 22:06:10.467536 | orchestrator | 22:06:10.467 STDOUT terraform:       + directory_permission = "0700"
2025-06-01 22:06:10.467586 | orchestrator | 22:06:10.467 STDOUT terraform:       + file_permission = "0600"
2025-06-01 22:06:10.467645 | orchestrator | 22:06:10.467 STDOUT terraform:       + filename = ".id_rsa.ci"
2025-06-01 22:06:10.467719 | orchestrator | 22:06:10.467 STDOUT terraform:       + id = (known after apply)
2025-06-01 22:06:10.467749 | orchestrator | 22:06:10.467 STDOUT terraform:     }
2025-06-01 22:06:10.467810 | orchestrator | 22:06:10.467 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-06-01 22:06:10.467878 | orchestrator | 22:06:10.467 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-06-01 22:06:10.467911 | orchestrator | 22:06:10.467 STDOUT terraform:       + id = (known after apply)
2025-06-01 22:06:10.467939 | orchestrator | 22:06:10.467 STDOUT terraform:     }
2025-06-01 22:06:10.468037 | orchestrator | 22:06:10.467 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-06-01 22:06:10.468148 | orchestrator | 22:06:10.468 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-06-01 22:06:10.468219 | orchestrator | 22:06:10.468 STDOUT terraform:       + attachment = (known after apply)
2025-06-01 22:06:10.468269 | orchestrator | 22:06:10.468 STDOUT terraform:       + availability_zone = "nova"
2025-06-01 22:06:10.468380 | orchestrator | 22:06:10.468 STDOUT terraform:       + id = (known after apply)
2025-06-01 22:06:10.468506 | orchestrator | 22:06:10.468 STDOUT terraform:       + image_id = (known after apply)
2025-06-01 22:06:10.468580 | orchestrator | 22:06:10.468 STDOUT terraform:       + metadata = (known after apply)
2025-06-01 22:06:10.468674 | orchestrator | 22:06:10.468 STDOUT terraform:       + name = "testbed-volume-manager-base"
2025-06-01 22:06:10.468748 | orchestrator | 22:06:10.468 STDOUT terraform:       + region = (known after apply)
2025-06-01 22:06:10.468788 | orchestrator | 22:06:10.468 STDOUT terraform:       + size = 80
2025-06-01 22:06:10.468838 | orchestrator | 22:06:10.468 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-01 22:06:10.468887 | orchestrator | 22:06:10.468 STDOUT terraform:       + volume_type = "ssd"
2025-06-01 22:06:10.468922 | orchestrator | 22:06:10.468 STDOUT terraform:     }
2025-06-01 22:06:10.469014 | orchestrator | 22:06:10.468 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-06-01 22:06:10.469147 | orchestrator | 22:06:10.469 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-01 22:06:10.469221 | orchestrator | 22:06:10.469 STDOUT terraform:       + attachment = (known after apply)
2025-06-01 22:06:10.469272 | orchestrator | 22:06:10.469 STDOUT terraform:       + availability_zone = "nova"
2025-06-01 22:06:10.469336 | orchestrator | 22:06:10.469 STDOUT terraform:       + id = (known after apply)
2025-06-01 22:06:10.469397 | orchestrator | 22:06:10.469 STDOUT terraform:       + image_id = (known after apply)
2025-06-01 22:06:10.469460 | orchestrator | 22:06:10.469 STDOUT terraform:       + metadata = (known after apply)
2025-06-01 22:06:10.469538 | orchestrator | 22:06:10.469 STDOUT terraform:       + name = "testbed-volume-0-node-base"
2025-06-01 22:06:10.469608 | orchestrator | 22:06:10.469 STDOUT terraform:       + region = (known after apply)
2025-06-01 22:06:10.469637 | orchestrator | 22:06:10.469 STDOUT terraform:       + size = 80
2025-06-01 22:06:10.469679 | orchestrator | 22:06:10.469 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-01 22:06:10.469721 | orchestrator | 22:06:10.469 STDOUT terraform:       + volume_type = "ssd"
2025-06-01 22:06:10.469745 | orchestrator | 22:06:10.469 STDOUT terraform:     }
2025-06-01 22:06:10.469944 | orchestrator | 22:06:10.469 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-06-01 22:06:10.474199 | orchestrator | 22:06:10.469 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-01 22:06:10.474289 | orchestrator | 22:06:10.470 STDOUT terraform:       + attachment = (known after apply)
2025-06-01 22:06:10.474305 | orchestrator | 22:06:10.470 STDOUT terraform:       + availability_zone = "nova"
2025-06-01 22:06:10.474317 | orchestrator | 22:06:10.470 STDOUT terraform:       + id = (known after apply)
2025-06-01 22:06:10.474328 | orchestrator | 22:06:10.470 STDOUT terraform:       + image_id = (known after apply)
2025-06-01 22:06:10.474355 | orchestrator | 22:06:10.470 STDOUT terraform:       + metadata = (known after apply)
2025-06-01 22:06:10.474367 | orchestrator | 22:06:10.470 STDOUT terraform:       + name = "testbed-volume-1-node-base"
2025-06-01 22:06:10.474379 | orchestrator | 22:06:10.470 STDOUT terraform:       + region = (known after apply)
2025-06-01 22:06:10.474390 | orchestrator | 22:06:10.470 STDOUT terraform:       + size = 80
2025-06-01 22:06:10.474401 | orchestrator | 22:06:10.470 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-01 22:06:10.474411 | orchestrator | 22:06:10.470 STDOUT terraform:       + volume_type = "ssd"
2025-06-01 22:06:10.474422 | orchestrator | 22:06:10.470 STDOUT terraform:     }
2025-06-01 22:06:10.474433 | orchestrator | 22:06:10.470 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-06-01 22:06:10.474444 | orchestrator | 22:06:10.470 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-01 22:06:10.474455 | orchestrator | 22:06:10.470 STDOUT terraform:       + attachment = (known after apply)
2025-06-01 22:06:10.474471 | orchestrator | 22:06:10.470 STDOUT terraform:       + availability_zone = "nova"
2025-06-01 22:06:10.474490 | orchestrator | 22:06:10.470 STDOUT terraform:       + id = (known after apply)
2025-06-01 22:06:10.474508 | orchestrator | 22:06:10.470 STDOUT terraform:       + image_id = (known after apply)
2025-06-01 22:06:10.474527 | orchestrator | 22:06:10.470 STDOUT terraform:       + metadata = (known after apply)
2025-06-01 22:06:10.474546 | orchestrator | 22:06:10.470 STDOUT terraform:       + name = "testbed-volume-2-node-base"
2025-06-01 22:06:10.474560 | orchestrator | 22:06:10.470 STDOUT terraform:       + region = (known after apply)
2025-06-01 22:06:10.474571 | orchestrator | 22:06:10.470 STDOUT terraform:       + size = 80
2025-06-01 22:06:10.474582 | orchestrator | 22:06:10.470 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-01 22:06:10.474594 | orchestrator | 22:06:10.470 STDOUT terraform:       + volume_type = "ssd"
2025-06-01 22:06:10.474604 | orchestrator | 22:06:10.470 STDOUT terraform:     }
2025-06-01 22:06:10.474615 | orchestrator | 22:06:10.470 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-06-01 22:06:10.474626 | orchestrator | 22:06:10.470 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-01 22:06:10.474636 | orchestrator | 22:06:10.470 STDOUT terraform:       + attachment = (known after apply)
2025-06-01 22:06:10.474647 | orchestrator | 22:06:10.470 STDOUT terraform:       + availability_zone = "nova"
2025-06-01 22:06:10.474658 | orchestrator | 22:06:10.470 STDOUT terraform:       + id = (known after apply)
2025-06-01 22:06:10.474668 | orchestrator | 22:06:10.471 STDOUT terraform:       + image_id = (known after apply)
2025-06-01 22:06:10.474679 | orchestrator | 22:06:10.471 STDOUT terraform:       + metadata = (known after apply)
2025-06-01 22:06:10.474690 | orchestrator | 22:06:10.471 STDOUT terraform:       + name = "testbed-volume-3-node-base"
2025-06-01 22:06:10.474700 | orchestrator | 22:06:10.471 STDOUT terraform:       + region = (known after apply)
2025-06-01 22:06:10.474718 | orchestrator | 22:06:10.471 STDOUT terraform:       + size = 80
2025-06-01 22:06:10.474729 | orchestrator | 22:06:10.471 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-01 22:06:10.474754 | orchestrator | 22:06:10.471 STDOUT terraform:       + volume_type = "ssd"
2025-06-01 22:06:10.474766 | orchestrator | 22:06:10.471 STDOUT terraform:     }
2025-06-01 22:06:10.474777 | orchestrator | 22:06:10.471 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-06-01 22:06:10.474788 | orchestrator | 22:06:10.471 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-01 22:06:10.474798 | orchestrator | 22:06:10.471 STDOUT terraform:       + attachment = (known after apply)
2025-06-01 22:06:10.474809 | orchestrator | 22:06:10.471 STDOUT terraform:       + availability_zone = "nova"
2025-06-01 22:06:10.474820 | orchestrator | 22:06:10.471 STDOUT terraform:       + id = (known after apply)
2025-06-01 22:06:10.474830 | orchestrator | 22:06:10.471 STDOUT terraform:       + image_id = (known after apply)
2025-06-01 22:06:10.474841 | orchestrator | 22:06:10.471 STDOUT terraform:       + metadata = (known after apply)
2025-06-01 22:06:10.474852 | orchestrator | 22:06:10.471 STDOUT terraform:       + name = "testbed-volume-4-node-base"
2025-06-01 22:06:10.474863 | orchestrator | 22:06:10.471 STDOUT terraform:       + region = (known after apply)
2025-06-01 22:06:10.474874 | orchestrator | 22:06:10.471 STDOUT terraform:       + size = 80
2025-06-01 22:06:10.474890 | orchestrator | 22:06:10.471 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-01 22:06:10.474902 | orchestrator | 22:06:10.471 STDOUT terraform:       + volume_type = "ssd"
2025-06-01 22:06:10.474912 | orchestrator | 22:06:10.471 STDOUT terraform:     }
2025-06-01 22:06:10.474923 | orchestrator | 22:06:10.471 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-06-01 22:06:10.474934 | orchestrator | 22:06:10.471 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-01 22:06:10.474945 | orchestrator | 22:06:10.471 STDOUT terraform:       + attachment = (known after apply)
2025-06-01 22:06:10.474956 | orchestrator | 22:06:10.471 STDOUT terraform:       + availability_zone = "nova"
2025-06-01 22:06:10.474967 | orchestrator | 22:06:10.471 STDOUT terraform:       + id = (known after apply)
2025-06-01 22:06:10.474977 | orchestrator | 22:06:10.471 STDOUT terraform:       + image_id = (known after apply)
2025-06-01 22:06:10.474988 | orchestrator | 22:06:10.471 STDOUT terraform:       + metadata = (known after apply)
2025-06-01 22:06:10.474999 | orchestrator | 22:06:10.471 STDOUT terraform:       + name = "testbed-volume-5-node-base"
2025-06-01 22:06:10.475009 | orchestrator | 22:06:10.471 STDOUT terraform:       + region = (known after apply)
2025-06-01 22:06:10.475020 | orchestrator | 22:06:10.471 STDOUT terraform:       + size = 80
2025-06-01 22:06:10.475031 | orchestrator | 22:06:10.471 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-01 22:06:10.475042 | orchestrator | 22:06:10.471 STDOUT terraform:       + volume_type = "ssd"
2025-06-01 22:06:10.475053 | orchestrator | 22:06:10.471 STDOUT terraform:     }
2025-06-01 22:06:10.475100 | orchestrator | 22:06:10.471 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-06-01 22:06:10.475122 | orchestrator | 22:06:10.472 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-01 22:06:10.475141 | orchestrator | 22:06:10.472 STDOUT terraform:       + attachment = (known after apply)
2025-06-01 22:06:10.475193 | orchestrator | 22:06:10.472 STDOUT terraform:       +
availability_zone = "nova" 2025-06-01 22:06:10.475206 | orchestrator | 22:06:10.472 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.475217 | orchestrator | 22:06:10.472 STDOUT terraform:  + metadata = (known after apply) 2025-06-01 22:06:10.475228 | orchestrator | 22:06:10.472 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-06-01 22:06:10.475238 | orchestrator | 22:06:10.472 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.475249 | orchestrator | 22:06:10.472 STDOUT terraform:  + size = 20 2025-06-01 22:06:10.475271 | orchestrator | 22:06:10.472 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-01 22:06:10.475283 | orchestrator | 22:06:10.472 STDOUT terraform:  + volume_type = "ssd" 2025-06-01 22:06:10.475294 | orchestrator | 22:06:10.472 STDOUT terraform:  } 2025-06-01 22:06:10.475304 | orchestrator | 22:06:10.472 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-06-01 22:06:10.475321 | orchestrator | 22:06:10.472 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-01 22:06:10.475332 | orchestrator | 22:06:10.472 STDOUT terraform:  + attachment = (known after apply) 2025-06-01 22:06:10.475343 | orchestrator | 22:06:10.472 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 22:06:10.475354 | orchestrator | 22:06:10.472 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.475364 | orchestrator | 22:06:10.472 STDOUT terraform:  + metadata = (known after apply) 2025-06-01 22:06:10.475375 | orchestrator | 22:06:10.472 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-06-01 22:06:10.475386 | orchestrator | 22:06:10.472 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.475397 | orchestrator | 22:06:10.472 STDOUT terraform:  + size = 20 2025-06-01 22:06:10.475407 | orchestrator | 22:06:10.472 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-01 22:06:10.475422 | orchestrator | 
22:06:10.472 STDOUT terraform:  + volume_type = "ssd" 2025-06-01 22:06:10.475433 | orchestrator | 22:06:10.472 STDOUT terraform:  } 2025-06-01 22:06:10.475444 | orchestrator | 22:06:10.472 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-06-01 22:06:10.475454 | orchestrator | 22:06:10.472 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-01 22:06:10.475465 | orchestrator | 22:06:10.472 STDOUT terraform:  + attachment = (known after apply) 2025-06-01 22:06:10.475476 | orchestrator | 22:06:10.472 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 22:06:10.475487 | orchestrator | 22:06:10.472 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.475506 | orchestrator | 22:06:10.472 STDOUT terraform:  + metadata = (known after apply) 2025-06-01 22:06:10.475517 | orchestrator | 22:06:10.472 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-06-01 22:06:10.475528 | orchestrator | 22:06:10.472 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.475539 | orchestrator | 22:06:10.472 STDOUT terraform:  + size = 20 2025-06-01 22:06:10.475550 | orchestrator | 22:06:10.472 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-01 22:06:10.475561 | orchestrator | 22:06:10.472 STDOUT terraform:  + volume_type = "ssd" 2025-06-01 22:06:10.475572 | orchestrator | 22:06:10.472 STDOUT terraform:  } 2025-06-01 22:06:10.475583 | orchestrator | 22:06:10.472 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-06-01 22:06:10.475594 | orchestrator | 22:06:10.473 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-01 22:06:10.475605 | orchestrator | 22:06:10.473 STDOUT terraform:  + attachment = (known after apply) 2025-06-01 22:06:10.475616 | orchestrator | 22:06:10.473 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 22:06:10.475627 | orchestrator | 22:06:10.473 STDOUT 
terraform:  + id = (known after apply) 2025-06-01 22:06:10.475638 | orchestrator | 22:06:10.473 STDOUT terraform:  + metadata = (known after apply) 2025-06-01 22:06:10.475648 | orchestrator | 22:06:10.473 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-06-01 22:06:10.475659 | orchestrator | 22:06:10.473 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.475670 | orchestrator | 22:06:10.473 STDOUT terraform:  + size = 20 2025-06-01 22:06:10.475685 | orchestrator | 22:06:10.473 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-01 22:06:10.475697 | orchestrator | 22:06:10.473 STDOUT terraform:  + volume_type = "ssd" 2025-06-01 22:06:10.475718 | orchestrator | 22:06:10.473 STDOUT terraform:  } 2025-06-01 22:06:10.475731 | orchestrator | 22:06:10.473 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-06-01 22:06:10.475742 | orchestrator | 22:06:10.473 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-01 22:06:10.475753 | orchestrator | 22:06:10.473 STDOUT terraform:  + attachment = (known after apply) 2025-06-01 22:06:10.475763 | orchestrator | 22:06:10.473 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 22:06:10.475775 | orchestrator | 22:06:10.473 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.475785 | orchestrator | 22:06:10.473 STDOUT terraform:  + metadata = (known after apply) 2025-06-01 22:06:10.475796 | orchestrator | 22:06:10.473 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-06-01 22:06:10.475807 | orchestrator | 22:06:10.473 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.475818 | orchestrator | 22:06:10.473 STDOUT terraform:  + size = 20 2025-06-01 22:06:10.475829 | orchestrator | 22:06:10.473 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-01 22:06:10.475840 | orchestrator | 22:06:10.473 STDOUT terraform:  + volume_type = "ssd" 2025-06-01 22:06:10.475856 | 
orchestrator | 22:06:10.473 STDOUT terraform:  } 2025-06-01 22:06:10.475868 | orchestrator | 22:06:10.473 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-06-01 22:06:10.475878 | orchestrator | 22:06:10.473 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-01 22:06:10.475889 | orchestrator | 22:06:10.473 STDOUT terraform:  + attachment = (known after apply) 2025-06-01 22:06:10.475900 | orchestrator | 22:06:10.473 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 22:06:10.475911 | orchestrator | 22:06:10.473 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.475922 | orchestrator | 22:06:10.473 STDOUT terraform:  + metadata = (known after apply) 2025-06-01 22:06:10.475932 | orchestrator | 22:06:10.473 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-06-01 22:06:10.475943 | orchestrator | 22:06:10.473 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.475954 | orchestrator | 22:06:10.473 STDOUT terraform:  + size = 20 2025-06-01 22:06:10.475965 | orchestrator | 22:06:10.473 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-01 22:06:10.475976 | orchestrator | 22:06:10.473 STDOUT terraform:  + volume_type = "ssd" 2025-06-01 22:06:10.475987 | orchestrator | 22:06:10.474 STDOUT terraform:  } 2025-06-01 22:06:10.475998 | orchestrator | 22:06:10.474 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-06-01 22:06:10.476009 | orchestrator | 22:06:10.474 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-01 22:06:10.476019 | orchestrator | 22:06:10.474 STDOUT terraform:  + attachment = (known after apply) 2025-06-01 22:06:10.476030 | orchestrator | 22:06:10.474 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 22:06:10.476041 | orchestrator | 22:06:10.474 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.476052 | orchestrator | 
22:06:10.474 STDOUT terraform:  + metadata = (known after apply) 2025-06-01 22:06:10.476062 | orchestrator | 22:06:10.474 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-06-01 22:06:10.476110 | orchestrator | 22:06:10.474 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.476124 | orchestrator | 22:06:10.474 STDOUT terraform:  + size = 20 2025-06-01 22:06:10.476140 | orchestrator | 22:06:10.474 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-01 22:06:10.476151 | orchestrator | 22:06:10.474 STDOUT terraform:  + volume_type = "ssd" 2025-06-01 22:06:10.476162 | orchestrator | 22:06:10.474 STDOUT terraform:  } 2025-06-01 22:06:10.476180 | orchestrator | 22:06:10.474 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-06-01 22:06:10.476192 | orchestrator | 22:06:10.474 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-01 22:06:10.476202 | orchestrator | 22:06:10.474 STDOUT terraform:  + attachment = (known after apply) 2025-06-01 22:06:10.476213 | orchestrator | 22:06:10.474 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 22:06:10.476224 | orchestrator | 22:06:10.474 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.476247 | orchestrator | 22:06:10.474 STDOUT terraform:  + metadata = (known after apply) 2025-06-01 22:06:10.476258 | orchestrator | 22:06:10.474 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-06-01 22:06:10.476269 | orchestrator | 22:06:10.474 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.476280 | orchestrator | 22:06:10.474 STDOUT terraform:  + size = 20 2025-06-01 22:06:10.476291 | orchestrator | 22:06:10.474 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-01 22:06:10.476301 | orchestrator | 22:06:10.474 STDOUT terraform:  + volume_type = "ssd" 2025-06-01 22:06:10.476312 | orchestrator | 22:06:10.474 STDOUT terraform:  } 2025-06-01 22:06:10.476323 | orchestrator | 
22:06:10.474 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-06-01 22:06:10.476333 | orchestrator | 22:06:10.474 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-01 22:06:10.476344 | orchestrator | 22:06:10.474 STDOUT terraform:  + attachment = (known after apply) 2025-06-01 22:06:10.476355 | orchestrator | 22:06:10.474 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 22:06:10.476365 | orchestrator | 22:06:10.474 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.476376 | orchestrator | 22:06:10.474 STDOUT terraform:  + metadata = (known after apply) 2025-06-01 22:06:10.476387 | orchestrator | 22:06:10.474 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-06-01 22:06:10.476397 | orchestrator | 22:06:10.474 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.476408 | orchestrator | 22:06:10.474 STDOUT terraform:  + size = 20 2025-06-01 22:06:10.476419 | orchestrator | 22:06:10.474 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-01 22:06:10.476429 | orchestrator | 22:06:10.475 STDOUT terraform:  + volume_type = "ssd" 2025-06-01 22:06:10.476440 | orchestrator | 22:06:10.475 STDOUT terraform:  } 2025-06-01 22:06:10.476451 | orchestrator | 22:06:10.475 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-06-01 22:06:10.476462 | orchestrator | 22:06:10.475 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-06-01 22:06:10.476472 | orchestrator | 22:06:10.475 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-01 22:06:10.476483 | orchestrator | 22:06:10.475 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-01 22:06:10.476494 | orchestrator | 22:06:10.475 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-01 22:06:10.476505 | orchestrator | 22:06:10.475 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 
22:06:10.476516 | orchestrator | 22:06:10.475 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 22:06:10.476526 | orchestrator | 22:06:10.475 STDOUT terraform:  + config_drive = true 2025-06-01 22:06:10.476537 | orchestrator | 22:06:10.475 STDOUT terraform:  + created = (known after apply) 2025-06-01 22:06:10.476548 | orchestrator | 22:06:10.475 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-01 22:06:10.476565 | orchestrator | 22:06:10.475 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-06-01 22:06:10.476575 | orchestrator | 22:06:10.475 STDOUT terraform:  + force_delete = false 2025-06-01 22:06:10.476586 | orchestrator | 22:06:10.475 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-01 22:06:10.476602 | orchestrator | 22:06:10.475 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.476618 | orchestrator | 22:06:10.475 STDOUT terraform:  + image_id = (known after apply) 2025-06-01 22:06:10.476629 | orchestrator | 22:06:10.475 STDOUT terraform:  + image_name = (known after apply) 2025-06-01 22:06:10.476640 | orchestrator | 22:06:10.475 STDOUT terraform:  + key_pair = "testbed" 2025-06-01 22:06:10.476651 | orchestrator | 22:06:10.475 STDOUT terraform:  + name = "testbed-manager" 2025-06-01 22:06:10.476662 | orchestrator | 22:06:10.475 STDOUT terraform:  + power_state = "active" 2025-06-01 22:06:10.476672 | orchestrator | 22:06:10.475 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.476683 | orchestrator | 22:06:10.475 STDOUT terraform:  + security_groups = (known after apply) 2025-06-01 22:06:10.476694 | orchestrator | 22:06:10.475 STDOUT terraform:  + stop_before_destroy = false 2025-06-01 22:06:10.476704 | orchestrator | 22:06:10.475 STDOUT terraform:  + updated = (known after apply) 2025-06-01 22:06:10.476715 | orchestrator | 22:06:10.475 STDOUT terraform:  + user_data = (known after apply) 2025-06-01 22:06:10.476726 | orchestrator | 22:06:10.475 STDOUT terraform:  + block_device 
{ 2025-06-01 22:06:10.476737 | orchestrator | 22:06:10.475 STDOUT terraform:  + boot_index = 0 2025-06-01 22:06:10.476748 | orchestrator | 22:06:10.475 STDOUT terraform:  + delete_on_termination = false 2025-06-01 22:06:10.476758 | orchestrator | 22:06:10.475 STDOUT terraform:  + destination_type = "volume" 2025-06-01 22:06:10.476769 | orchestrator | 22:06:10.475 STDOUT terraform:  + multiattach = false 2025-06-01 22:06:10.476780 | orchestrator | 22:06:10.475 STDOUT terraform:  + source_type = "volume" 2025-06-01 22:06:10.476791 | orchestrator | 22:06:10.475 STDOUT terraform:  + uuid = (known after apply) 2025-06-01 22:06:10.476802 | orchestrator | 22:06:10.475 STDOUT terraform:  } 2025-06-01 22:06:10.476812 | orchestrator | 22:06:10.475 STDOUT terraform:  + network { 2025-06-01 22:06:10.476823 | orchestrator | 22:06:10.475 STDOUT terraform:  + access_network = false 2025-06-01 22:06:10.476834 | orchestrator | 22:06:10.475 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-01 22:06:10.476845 | orchestrator | 22:06:10.475 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-01 22:06:10.476856 | orchestrator | 22:06:10.475 STDOUT terraform:  + mac = (known after apply) 2025-06-01 22:06:10.476866 | orchestrator | 22:06:10.475 STDOUT terraform:  + name = (known after apply) 2025-06-01 22:06:10.476877 | orchestrator | 22:06:10.476 STDOUT terraform:  + port = (known after apply) 2025-06-01 22:06:10.476888 | orchestrator | 22:06:10.476 STDOUT terraform:  + uuid = (known after apply) 2025-06-01 22:06:10.476905 | orchestrator | 22:06:10.476 STDOUT terraform:  } 2025-06-01 22:06:10.476916 | orchestrator | 22:06:10.476 STDOUT terraform:  } 2025-06-01 22:06:10.476928 | orchestrator | 22:06:10.476 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-06-01 22:06:10.476948 | orchestrator | 22:06:10.476 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-01 22:06:10.476960 | orchestrator | 
22:06:10.476 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-01 22:06:10.476970 | orchestrator | 22:06:10.476 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-01 22:06:10.476981 | orchestrator | 22:06:10.476 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-01 22:06:10.476992 | orchestrator | 22:06:10.476 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 22:06:10.477003 | orchestrator | 22:06:10.476 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 22:06:10.477014 | orchestrator | 22:06:10.476 STDOUT terraform:  + config_drive = true 2025-06-01 22:06:10.477025 | orchestrator | 22:06:10.476 STDOUT terraform:  + created = (known after apply) 2025-06-01 22:06:10.477035 | orchestrator | 22:06:10.476 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-01 22:06:10.477053 | orchestrator | 22:06:10.476 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-01 22:06:10.477064 | orchestrator | 22:06:10.476 STDOUT terraform:  + force_delete = false 2025-06-01 22:06:10.477098 | orchestrator | 22:06:10.476 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-01 22:06:10.477110 | orchestrator | 22:06:10.476 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.477121 | orchestrator | 22:06:10.476 STDOUT terraform:  + image_id = (known after apply) 2025-06-01 22:06:10.477132 | orchestrator | 22:06:10.476 STDOUT terraform:  + image_name = (known after apply) 2025-06-01 22:06:10.477143 | orchestrator | 22:06:10.476 STDOUT terraform:  + key_pair = "testbed" 2025-06-01 22:06:10.477154 | orchestrator | 22:06:10.476 STDOUT terraform:  + name = "testbed-node-0" 2025-06-01 22:06:10.477165 | orchestrator | 22:06:10.476 STDOUT terraform:  + power_state = "active" 2025-06-01 22:06:10.477185 | orchestrator | 22:06:10.476 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.477196 | orchestrator | 22:06:10.476 STDOUT terraform:  + security_groups = (known after apply) 
2025-06-01 22:06:10.477207 | orchestrator | 22:06:10.476 STDOUT terraform:  + stop_before_destroy = false 2025-06-01 22:06:10.477218 | orchestrator | 22:06:10.476 STDOUT terraform:  + updated = (known after apply) 2025-06-01 22:06:10.477229 | orchestrator | 22:06:10.476 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-01 22:06:10.477240 | orchestrator | 22:06:10.476 STDOUT terraform:  + block_device { 2025-06-01 22:06:10.477251 | orchestrator | 22:06:10.476 STDOUT terraform:  + boot_index = 0 2025-06-01 22:06:10.477262 | orchestrator | 22:06:10.476 STDOUT terraform:  + delete_on_termination = false 2025-06-01 22:06:10.477272 | orchestrator | 22:06:10.476 STDOUT terraform:  + destination_type = "volume" 2025-06-01 22:06:10.477290 | orchestrator | 22:06:10.476 STDOUT terraform:  + multiattach = false 2025-06-01 22:06:10.477301 | orchestrator | 22:06:10.476 STDOUT terraform:  + source_type = "volume" 2025-06-01 22:06:10.477312 | orchestrator | 22:06:10.476 STDOUT terraform:  + uuid = (known after apply) 2025-06-01 22:06:10.477323 | orchestrator | 22:06:10.477 STDOUT terraform:  } 2025-06-01 22:06:10.477339 | orchestrator | 22:06:10.477 STDOUT terraform:  + network { 2025-06-01 22:06:10.477350 | orchestrator | 22:06:10.477 STDOUT terraform:  + access_network = false 2025-06-01 22:06:10.477361 | orchestrator | 22:06:10.477 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-01 22:06:10.477372 | orchestrator | 22:06:10.477 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-01 22:06:10.477382 | orchestrator | 22:06:10.477 STDOUT terraform:  + mac = (known after apply) 2025-06-01 22:06:10.477393 | orchestrator | 22:06:10.477 STDOUT terraform:  + name = (known after apply) 2025-06-01 22:06:10.477404 | orchestrator | 22:06:10.477 STDOUT terraform:  + port = (known after apply) 2025-06-01 22:06:10.477414 | orchestrator | 22:06:10.477 STDOUT terraform:  + uuid = (known after apply) 2025-06-01 22:06:10.477425 | 
orchestrator | 22:06:10.477 STDOUT terraform:  } 2025-06-01 22:06:10.477436 | orchestrator | 22:06:10.477 STDOUT terraform:  } 2025-06-01 22:06:10.477447 | orchestrator | 22:06:10.477 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-06-01 22:06:10.477461 | orchestrator | 22:06:10.477 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-01 22:06:10.477473 | orchestrator | 22:06:10.477 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-01 22:06:10.477483 | orchestrator | 22:06:10.477 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-01 22:06:10.477494 | orchestrator | 22:06:10.477 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-01 22:06:10.477509 | orchestrator | 22:06:10.477 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 22:06:10.477521 | orchestrator | 22:06:10.477 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 22:06:10.477535 | orchestrator | 22:06:10.477 STDOUT terraform:  + config_drive = true 2025-06-01 22:06:10.477549 | orchestrator | 22:06:10.477 STDOUT terraform:  + created = (known after apply) 2025-06-01 22:06:10.477614 | orchestrator | 22:06:10.477 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-01 22:06:10.477632 | orchestrator | 22:06:10.477 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-01 22:06:10.477646 | orchestrator | 22:06:10.477 STDOUT terraform:  + force_delete = false 2025-06-01 22:06:10.477677 | orchestrator | 22:06:10.477 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-01 22:06:10.477714 | orchestrator | 22:06:10.477 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.477753 | orchestrator | 22:06:10.477 STDOUT terraform:  + image_id = (known after apply) 2025-06-01 22:06:10.477785 | orchestrator | 22:06:10.477 STDOUT terraform:  + image_name = (known after apply) 2025-06-01 22:06:10.477808 | orchestrator | 22:06:10.477 STDOUT terraform:  + 
key_pair = "testbed" 2025-06-01 22:06:10.477838 | orchestrator | 22:06:10.477 STDOUT terraform:  + name = "testbed-node-1" 2025-06-01 22:06:10.477853 | orchestrator | 22:06:10.477 STDOUT terraform:  + power_state = "active" 2025-06-01 22:06:10.477901 | orchestrator | 22:06:10.477 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.477922 | orchestrator | 22:06:10.477 STDOUT terraform:  + security_groups = (known after apply) 2025-06-01 22:06:10.477942 | orchestrator | 22:06:10.477 STDOUT terraform:  + stop_before_destroy = false 2025-06-01 22:06:10.477984 | orchestrator | 22:06:10.477 STDOUT terraform:  + updated = (known after apply) 2025-06-01 22:06:10.478042 | orchestrator | 22:06:10.477 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-01 22:06:10.478061 | orchestrator | 22:06:10.478 STDOUT terraform:  + block_device { 2025-06-01 22:06:10.478102 | orchestrator | 22:06:10.478 STDOUT terraform:  + boot_index = 0 2025-06-01 22:06:10.478115 | orchestrator | 22:06:10.478 STDOUT terraform:  + delete_on_termination = false 2025-06-01 22:06:10.478129 | orchestrator | 22:06:10.478 STDOUT terraform:  + destination_type = "volume" 2025-06-01 22:06:10.478169 | orchestrator | 22:06:10.478 STDOUT terraform:  + multiattach = false 2025-06-01 22:06:10.478191 | orchestrator | 22:06:10.478 STDOUT terraform:  + source_type = "volume" 2025-06-01 22:06:10.478233 | orchestrator | 22:06:10.478 STDOUT terraform:  + uuid = (known after apply) 2025-06-01 22:06:10.478245 | orchestrator | 22:06:10.478 STDOUT terraform:  } 2025-06-01 22:06:10.478260 | orchestrator | 22:06:10.478 STDOUT terraform:  + network { 2025-06-01 22:06:10.478271 | orchestrator | 22:06:10.478 STDOUT terraform:  + access_network = false 2025-06-01 22:06:10.478285 | orchestrator | 22:06:10.478 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-01 22:06:10.478324 | orchestrator | 22:06:10.478 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-01 
22:06:10.478340 | orchestrator | 22:06:10.478 STDOUT terraform:  + mac = (known after apply) 2025-06-01 22:06:10.478378 | orchestrator | 22:06:10.478 STDOUT terraform:  + name = (known after apply) 2025-06-01 22:06:10.478394 | orchestrator | 22:06:10.478 STDOUT terraform:  + port = (known after apply) 2025-06-01 22:06:10.478433 | orchestrator | 22:06:10.478 STDOUT terraform:  + uuid = (known after apply) 2025-06-01 22:06:10.478445 | orchestrator | 22:06:10.478 STDOUT terraform:  } 2025-06-01 22:06:10.478459 | orchestrator | 22:06:10.478 STDOUT terraform:  } 2025-06-01 22:06:10.478499 | orchestrator | 22:06:10.478 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-06-01 22:06:10.478538 | orchestrator | 22:06:10.478 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-01 22:06:10.478554 | orchestrator | 22:06:10.478 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-01 22:06:10.478608 | orchestrator | 22:06:10.478 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-01 22:06:10.478633 | orchestrator | 22:06:10.478 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-01 22:06:10.478675 | orchestrator | 22:06:10.478 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 22:06:10.478687 | orchestrator | 22:06:10.478 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 22:06:10.478701 | orchestrator | 22:06:10.478 STDOUT terraform:  + config_drive = true 2025-06-01 22:06:10.478740 | orchestrator | 22:06:10.478 STDOUT terraform:  + created = (known after apply) 2025-06-01 22:06:10.478756 | orchestrator | 22:06:10.478 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-01 22:06:10.478808 | orchestrator | 22:06:10.478 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-01 22:06:10.478824 | orchestrator | 22:06:10.478 STDOUT terraform:  + force_delete = false 2025-06-01 22:06:10.478863 | orchestrator | 22:06:10.478 STDOUT terraform:  + 
2025-06-01 22:06:10.478 | orchestrator | 22:06:10.478 STDOUT terraform: (plan output continues; repeated per-fragment log prefixes elided below)
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
+ allowed_address_pairs { 2025-06-01 22:06:10.489880 | orchestrator | 22:06:10.489 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-01 22:06:10.489892 | orchestrator | 22:06:10.489 STDOUT terraform:  } 2025-06-01 22:06:10.489905 | orchestrator | 22:06:10.489 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 22:06:10.489941 | orchestrator | 22:06:10.489 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-01 22:06:10.489952 | orchestrator | 22:06:10.489 STDOUT terraform:  } 2025-06-01 22:06:10.489965 | orchestrator | 22:06:10.489 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 22:06:10.489978 | orchestrator | 22:06:10.489 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-01 22:06:10.489991 | orchestrator | 22:06:10.489 STDOUT terraform:  } 2025-06-01 22:06:10.490003 | orchestrator | 22:06:10.489 STDOUT terraform:  + binding (known after apply) 2025-06-01 22:06:10.490042 | orchestrator | 22:06:10.489 STDOUT terraform:  + fixed_ip { 2025-06-01 22:06:10.490057 | orchestrator | 22:06:10.490 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-06-01 22:06:10.490104 | orchestrator | 22:06:10.490 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-01 22:06:10.490119 | orchestrator | 22:06:10.490 STDOUT terraform:  } 2025-06-01 22:06:10.490136 | orchestrator | 22:06:10.490 STDOUT terraform:  } 2025-06-01 22:06:10.490174 | orchestrator | 22:06:10.490 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-06-01 22:06:10.490281 | orchestrator | 22:06:10.490 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-01 22:06:10.490296 | orchestrator | 22:06:10.490 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-01 22:06:10.490309 | orchestrator | 22:06:10.490 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-01 22:06:10.490321 | orchestrator | 22:06:10.490 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-06-01 22:06:10.490371 | orchestrator | 22:06:10.490 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 22:06:10.490417 | orchestrator | 22:06:10.490 STDOUT terraform:  + device_id = (known after apply) 2025-06-01 22:06:10.490431 | orchestrator | 22:06:10.490 STDOUT terraform:  + device_owner = (known after apply) 2025-06-01 22:06:10.490479 | orchestrator | 22:06:10.490 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-01 22:06:10.490526 | orchestrator | 22:06:10.490 STDOUT terraform:  + dns_name = (known after apply) 2025-06-01 22:06:10.490567 | orchestrator | 22:06:10.490 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.490580 | orchestrator | 22:06:10.490 STDOUT terraform:  + mac_address = (known after apply) 2025-06-01 22:06:10.490628 | orchestrator | 22:06:10.490 STDOUT terraform:  + network_id = (known after apply) 2025-06-01 22:06:10.490659 | orchestrator | 22:06:10.490 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-01 22:06:10.490700 | orchestrator | 22:06:10.490 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-01 22:06:10.490740 | orchestrator | 22:06:10.490 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.490771 | orchestrator | 22:06:10.490 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-01 22:06:10.490815 | orchestrator | 22:06:10.490 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:06:10.490828 | orchestrator | 22:06:10.490 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 22:06:10.490865 | orchestrator | 22:06:10.490 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-01 22:06:10.490875 | orchestrator | 22:06:10.490 STDOUT terraform:  } 2025-06-01 22:06:10.490886 | orchestrator | 22:06:10.490 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 22:06:10.490915 | orchestrator | 22:06:10.490 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-01 22:06:10.490926 | 
orchestrator | 22:06:10.490 STDOUT terraform:  } 2025-06-01 22:06:10.490937 | orchestrator | 22:06:10.490 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 22:06:10.490966 | orchestrator | 22:06:10.490 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-01 22:06:10.490978 | orchestrator | 22:06:10.490 STDOUT terraform:  } 2025-06-01 22:06:10.490988 | orchestrator | 22:06:10.490 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 22:06:10.491027 | orchestrator | 22:06:10.490 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-01 22:06:10.491044 | orchestrator | 22:06:10.491 STDOUT terraform:  } 2025-06-01 22:06:10.491055 | orchestrator | 22:06:10.491 STDOUT terraform:  + binding (known after apply) 2025-06-01 22:06:10.491063 | orchestrator | 22:06:10.491 STDOUT terraform:  + fixed_ip { 2025-06-01 22:06:10.491120 | orchestrator | 22:06:10.491 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-06-01 22:06:10.491133 | orchestrator | 22:06:10.491 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-01 22:06:10.491144 | orchestrator | 22:06:10.491 STDOUT terraform:  } 2025-06-01 22:06:10.491154 | orchestrator | 22:06:10.491 STDOUT terraform:  } 2025-06-01 22:06:10.491212 | orchestrator | 22:06:10.491 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-06-01 22:06:10.491270 | orchestrator | 22:06:10.491 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-01 22:06:10.491310 | orchestrator | 22:06:10.491 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-01 22:06:10.491349 | orchestrator | 22:06:10.491 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-01 22:06:10.491379 | orchestrator | 22:06:10.491 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-01 22:06:10.491417 | orchestrator | 22:06:10.491 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 22:06:10.491457 | orchestrator | 
22:06:10.491 STDOUT terraform:  + device_id = (known after apply) 2025-06-01 22:06:10.491495 | orchestrator | 22:06:10.491 STDOUT terraform:  + device_owner = (known after 2025-06-01 22:06:10.491576 | orchestrator | 22:06:10.491 STDOUT terraform:  apply) 2025-06-01 22:06:10.491615 | orchestrator | 22:06:10.491 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-01 22:06:10.491645 | orchestrator | 22:06:10.491 STDOUT terraform:  + dns_name = (known after apply) 2025-06-01 22:06:10.491686 | orchestrator | 22:06:10.491 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.491716 | orchestrator | 22:06:10.491 STDOUT terraform:  + mac_address = (known after apply) 2025-06-01 22:06:10.491759 | orchestrator | 22:06:10.491 STDOUT terraform:  + network_id = (known after apply) 2025-06-01 22:06:10.491789 | orchestrator | 22:06:10.491 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-01 22:06:10.491828 | orchestrator | 22:06:10.491 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-01 22:06:10.491867 | orchestrator | 22:06:10.491 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.491897 | orchestrator | 22:06:10.491 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-01 22:06:10.491936 | orchestrator | 22:06:10.491 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:06:10.491948 | orchestrator | 22:06:10.491 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 22:06:10.491984 | orchestrator | 22:06:10.491 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-01 22:06:10.492004 | orchestrator | 22:06:10.491 STDOUT terraform:  } 2025-06-01 22:06:10.492015 | orchestrator | 22:06:10.491 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 22:06:10.492031 | orchestrator | 22:06:10.491 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-01 22:06:10.492040 | orchestrator | 22:06:10.492 STDOUT terraform:  } 2025-06-01 22:06:10.492062 | orchestrator 
| 22:06:10.492 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 22:06:10.492107 | orchestrator | 22:06:10.492 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-01 22:06:10.492116 | orchestrator | 22:06:10.492 STDOUT terraform:  } 2025-06-01 22:06:10.492127 | orchestrator | 22:06:10.492 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 22:06:10.492173 | orchestrator | 22:06:10.492 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-01 22:06:10.492183 | orchestrator | 22:06:10.492 STDOUT terraform:  } 2025-06-01 22:06:10.492193 | orchestrator | 22:06:10.492 STDOUT terraform:  + binding (known after apply) 2025-06-01 22:06:10.492223 | orchestrator | 22:06:10.492 STDOUT terraform:  + fixed_ip { 2025-06-01 22:06:10.492234 | orchestrator | 22:06:10.492 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-06-01 22:06:10.492271 | orchestrator | 22:06:10.492 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-01 22:06:10.492280 | orchestrator | 22:06:10.492 STDOUT terraform:  } 2025-06-01 22:06:10.492291 | orchestrator | 22:06:10.492 STDOUT terraform:  } 2025-06-01 22:06:10.492343 | orchestrator | 22:06:10.492 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-06-01 22:06:10.492388 | orchestrator | 22:06:10.492 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-01 22:06:10.492425 | orchestrator | 22:06:10.492 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-01 22:06:10.492463 | orchestrator | 22:06:10.492 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-01 22:06:10.492493 | orchestrator | 22:06:10.492 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-01 22:06:10.492540 | orchestrator | 22:06:10.492 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 22:06:10.492580 | orchestrator | 22:06:10.492 STDOUT terraform:  + device_id = (known after apply) 2025-06-01 22:06:10.492592 | 
orchestrator | 22:06:10.492 STDOUT terraform:  + device_owner = (known after apply) 2025-06-01 22:06:10.492647 | orchestrator | 22:06:10.492 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-01 22:06:10.492678 | orchestrator | 22:06:10.492 STDOUT terraform:  + dns_name = (known after apply) 2025-06-01 22:06:10.492717 | orchestrator | 22:06:10.492 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.492747 | orchestrator | 22:06:10.492 STDOUT terraform:  + mac_address = (known after apply) 2025-06-01 22:06:10.492789 | orchestrator | 22:06:10.492 STDOUT terraform:  + network_id = (known after apply) 2025-06-01 22:06:10.492826 | orchestrator | 22:06:10.492 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-01 22:06:10.492865 | orchestrator | 22:06:10.492 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-01 22:06:10.492899 | orchestrator | 22:06:10.492 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.492941 | orchestrator | 22:06:10.492 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-01 22:06:10.492961 | orchestrator | 22:06:10.492 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:06:10.492991 | orchestrator | 22:06:10.492 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 22:06:10.493003 | orchestrator | 22:06:10.492 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-01 22:06:10.493055 | orchestrator | 22:06:10.493 STDOUT terraform:  } 2025-06-01 22:06:10.493067 | orchestrator | 22:06:10.493 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 22:06:10.493122 | orchestrator | 22:06:10.493 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-01 22:06:10.493132 | orchestrator | 22:06:10.493 STDOUT terraform:  } 2025-06-01 22:06:10.493143 | orchestrator | 22:06:10.493 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 22:06:10.493172 | orchestrator | 22:06:10.493 STDOUT terraform:  + ip_address = "192.168.16.8/20" 
2025-06-01 22:06:10.493181 | orchestrator | 22:06:10.493 STDOUT terraform:  } 2025-06-01 22:06:10.493191 | orchestrator | 22:06:10.493 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 22:06:10.493362 | orchestrator | 22:06:10.493 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-01 22:06:10.493438 | orchestrator | 22:06:10.493 STDOUT terraform:  } 2025-06-01 22:06:10.493453 | orchestrator | 22:06:10.493 STDOUT terraform:  + binding (known after apply) 2025-06-01 22:06:10.493465 | orchestrator | 22:06:10.493 STDOUT terraform:  + fixed_ip { 2025-06-01 22:06:10.493476 | orchestrator | 22:06:10.493 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-06-01 22:06:10.493488 | orchestrator | 22:06:10.493 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-01 22:06:10.493509 | orchestrator | 22:06:10.493 STDOUT terraform:  } 2025-06-01 22:06:10.493520 | orchestrator | 22:06:10.493 STDOUT terraform:  } 2025-06-01 22:06:10.493532 | orchestrator | 22:06:10.493 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-06-01 22:06:10.493544 | orchestrator | 22:06:10.493 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-01 22:06:10.493555 | orchestrator | 22:06:10.493 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-01 22:06:10.493566 | orchestrator | 22:06:10.493 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-01 22:06:10.493580 | orchestrator | 22:06:10.493 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-01 22:06:10.493591 | orchestrator | 22:06:10.493 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 22:06:10.493605 | orchestrator | 22:06:10.493 STDOUT terraform:  + device_id = (known after apply) 2025-06-01 22:06:10.493620 | orchestrator | 22:06:10.493 STDOUT terraform:  + device_owner = (known after apply) 2025-06-01 22:06:10.493678 | orchestrator | 22:06:10.493 STDOUT terraform:  + 
dns_assignment = (known after apply) 2025-06-01 22:06:10.493762 | orchestrator | 22:06:10.493 STDOUT terraform:  + dns_name = (known after apply) 2025-06-01 22:06:10.493780 | orchestrator | 22:06:10.493 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.493819 | orchestrator | 22:06:10.493 STDOUT terraform:  + mac_address = (known after apply) 2025-06-01 22:06:10.493864 | orchestrator | 22:06:10.493 STDOUT terraform:  + network_id = (known after apply) 2025-06-01 22:06:10.493880 | orchestrator | 22:06:10.493 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-01 22:06:10.493933 | orchestrator | 22:06:10.493 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-01 22:06:10.493951 | orchestrator | 22:06:10.493 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.494010 | orchestrator | 22:06:10.493 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-01 22:06:10.494123 | orchestrator | 22:06:10.493 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:06:10.494150 | orchestrator | 22:06:10.494 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 22:06:10.494162 | orchestrator | 22:06:10.494 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-01 22:06:10.494173 | orchestrator | 22:06:10.494 STDOUT terraform:  } 2025-06-01 22:06:10.494188 | orchestrator | 22:06:10.494 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 22:06:10.494199 | orchestrator | 22:06:10.494 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-01 22:06:10.494210 | orchestrator | 22:06:10.494 STDOUT terraform:  } 2025-06-01 22:06:10.494221 | orchestrator | 22:06:10.494 STDOUT terraform:  + allowed_address_pairs { 2025-06-01 22:06:10.494234 | orchestrator | 22:06:10.494 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-01 22:06:10.494246 | orchestrator | 22:06:10.494 STDOUT terraform:  } 2025-06-01 22:06:10.494260 | orchestrator | 22:06:10.494 STDOUT terraform:  + 
allowed_address_pairs { 2025-06-01 22:06:10.494274 | orchestrator | 22:06:10.494 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-01 22:06:10.494285 | orchestrator | 22:06:10.494 STDOUT terraform:  } 2025-06-01 22:06:10.494299 | orchestrator | 22:06:10.494 STDOUT terraform:  + binding (known after apply) 2025-06-01 22:06:10.494329 | orchestrator | 22:06:10.494 STDOUT terraform:  + fixed_ip { 2025-06-01 22:06:10.494340 | orchestrator | 22:06:10.494 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-06-01 22:06:10.494354 | orchestrator | 22:06:10.494 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-01 22:06:10.494368 | orchestrator | 22:06:10.494 STDOUT terraform:  } 2025-06-01 22:06:10.494382 | orchestrator | 22:06:10.494 STDOUT terraform:  } 2025-06-01 22:06:10.494433 | orchestrator | 22:06:10.494 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-06-01 22:06:10.494482 | orchestrator | 22:06:10.494 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-06-01 22:06:10.494499 | orchestrator | 22:06:10.494 STDOUT terraform:  + force_destroy = false 2025-06-01 22:06:10.494537 | orchestrator | 22:06:10.494 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.494578 | orchestrator | 22:06:10.494 STDOUT terraform:  + port_id = (known after apply) 2025-06-01 22:06:10.494594 | orchestrator | 22:06:10.494 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.494633 | orchestrator | 22:06:10.494 STDOUT terraform:  + router_id = (known after apply) 2025-06-01 22:06:10.494660 | orchestrator | 22:06:10.494 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-01 22:06:10.494672 | orchestrator | 22:06:10.494 STDOUT terraform:  } 2025-06-01 22:06:10.494711 | orchestrator | 22:06:10.494 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-06-01 22:06:10.494728 | orchestrator | 22:06:10.494 STDOUT 
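The six `node_port_management` ports above differ only in their `fixed_ip` address (`192.168.16.10` through `.15`), which is the classic signature of a counted resource. A resource definition consistent with this plan might look like the following sketch; the `count`, the `net_management` network reference, and the `cidrhost` arithmetic are assumptions for illustration, not code taken from the osism/testbed repository.

```hcl
# Sketch only: a counted port resource that would produce the plan above.
# "net_management" and the count value are assumed names, not confirmed
# against the testbed Terraform sources.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  # The same four CIDRs are allowed on every node port (VRRP/VIP traffic).
  dynamic "allowed_address_pairs" {
    for_each = [
      "192.168.112.0/20",
      "192.168.16.254/20",
      "192.168.16.8/20",
      "192.168.16.9/20",
    ]
    content {
      ip_address = allowed_address_pairs.value
    }
  }

  # Sequential management addresses: index 0 -> .10, index 5 -> .15.
  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = cidrhost("192.168.16.0/20", count.index + 10)
  }
}
```

`cidrhost("192.168.16.0/20", 10)` evaluates to `192.168.16.10`, matching `node_port_management[0]` in the plan.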
terraform:  + resource "openstack_networking_router_v2" "router" { 2025-06-01 22:06:10.494773 | orchestrator | 22:06:10.494 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-01 22:06:10.494813 | orchestrator | 22:06:10.494 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 22:06:10.494829 | orchestrator | 22:06:10.494 STDOUT terraform:  + availability_zone_hints = [ 2025-06-01 22:06:10.494844 | orchestrator | 22:06:10.494 STDOUT terraform:  + "nova", 2025-06-01 22:06:10.494862 | orchestrator | 22:06:10.494 STDOUT terraform:  ] 2025-06-01 22:06:10.494891 | orchestrator | 22:06:10.494 STDOUT terraform:  + distributed = (known after apply) 2025-06-01 22:06:10.494931 | orchestrator | 22:06:10.494 STDOUT terraform:  + enable_snat = (known after apply) 2025-06-01 22:06:10.494983 | orchestrator | 22:06:10.494 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-06-01 22:06:10.495025 | orchestrator | 22:06:10.494 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.495046 | orchestrator | 22:06:10.495 STDOUT terraform:  + name = "testbed" 2025-06-01 22:06:10.495235 | orchestrator | 22:06:10.495 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.495257 | orchestrator | 22:06:10.495 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:06:10.495268 | orchestrator | 22:06:10.495 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-06-01 22:06:10.495279 | orchestrator | 22:06:10.495 STDOUT terraform:  } 2025-06-01 22:06:10.495290 | orchestrator | 22:06:10.495 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-06-01 22:06:10.495307 | orchestrator | 22:06:10.495 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-06-01 22:06:10.495318 | orchestrator | 22:06:10.495 STDOUT terraform:  + description = "ssh" 2025-06-01 22:06:10.495367 | orchestrator 
| 22:06:10.495 STDOUT terraform:  + direction = "ingress" 2025-06-01 22:06:10.495384 | orchestrator | 22:06:10.495 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 22:06:10.495422 | orchestrator | 22:06:10.495 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.495437 | orchestrator | 22:06:10.495 STDOUT terraform:  + port_range_max = 22 2025-06-01 22:06:10.495474 | orchestrator | 22:06:10.495 STDOUT terraform:  + port_range_min = 22 2025-06-01 22:06:10.495490 | orchestrator | 22:06:10.495 STDOUT terraform:  + protocol = "tcp" 2025-06-01 22:06:10.495516 | orchestrator | 22:06:10.495 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.495552 | orchestrator | 22:06:10.495 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 22:06:10.495580 | orchestrator | 22:06:10.495 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-01 22:06:10.495848 | orchestrator | 22:06:10.495 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 22:06:10.495875 | orchestrator | 22:06:10.495 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:06:10.495887 | orchestrator | 22:06:10.495 STDOUT terraform:  } 2025-06-01 22:06:10.495898 | orchestrator | 22:06:10.495 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-06-01 22:06:10.495910 | orchestrator | 22:06:10.495 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-06-01 22:06:10.495920 | orchestrator | 22:06:10.495 STDOUT terraform:  + description = "wireguard" 2025-06-01 22:06:10.495931 | orchestrator | 22:06:10.495 STDOUT terraform:  + direction = "ingress" 2025-06-01 22:06:10.495942 | orchestrator | 22:06:10.495 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 22:06:10.495953 | orchestrator | 22:06:10.495 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.495964 | orchestrator | 22:06:10.495 STDOUT terraform:  + 
port_range_max = 51820 2025-06-01 22:06:10.495975 | orchestrator | 22:06:10.495 STDOUT terraform:  + port_range_min = 51820 2025-06-01 22:06:10.495986 | orchestrator | 22:06:10.495 STDOUT terraform:  + protocol = "udp" 2025-06-01 22:06:10.496000 | orchestrator | 22:06:10.495 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.496010 | orchestrator | 22:06:10.495 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 22:06:10.496021 | orchestrator | 22:06:10.495 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-01 22:06:10.496032 | orchestrator | 22:06:10.495 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 22:06:10.496043 | orchestrator | 22:06:10.495 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:06:10.496068 | orchestrator | 22:06:10.495 STDOUT terraform:  } 2025-06-01 22:06:10.496113 | orchestrator | 22:06:10.496 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-06-01 22:06:10.496138 | orchestrator | 22:06:10.496 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-06-01 22:06:10.496161 | orchestrator | 22:06:10.496 STDOUT terraform:  + direction = "ingress" 2025-06-01 22:06:10.496186 | orchestrator | 22:06:10.496 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 22:06:10.496212 | orchestrator | 22:06:10.496 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.496227 | orchestrator | 22:06:10.496 STDOUT terraform:  + protocol = "tcp" 2025-06-01 22:06:10.496277 | orchestrator | 22:06:10.496 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.496294 | orchestrator | 22:06:10.496 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 22:06:10.496308 | orchestrator | 22:06:10.496 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-01 22:06:10.496353 | orchestrator | 22:06:10.496 STDOUT terraform:  + 
security_group_id = (known after apply) 2025-06-01 22:06:10.496393 | orchestrator | 22:06:10.496 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:06:10.496424 | orchestrator | 22:06:10.496 STDOUT terraform:  } 2025-06-01 22:06:10.496439 | orchestrator | 22:06:10.496 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-06-01 22:06:10.496505 | orchestrator | 22:06:10.496 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-06-01 22:06:10.496521 | orchestrator | 22:06:10.496 STDOUT terraform:  + direction = "ingress" 2025-06-01 22:06:10.496535 | orchestrator | 22:06:10.496 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 22:06:10.496576 | orchestrator | 22:06:10.496 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.496592 | orchestrator | 22:06:10.496 STDOUT terraform:  + protocol = "udp" 2025-06-01 22:06:10.496612 | orchestrator | 22:06:10.496 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.496646 | orchestrator | 22:06:10.496 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 22:06:10.496673 | orchestrator | 22:06:10.496 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-01 22:06:10.502601 | orchestrator | 22:06:10.496 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 22:06:10.502656 | orchestrator | 22:06:10.496 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:06:10.502665 | orchestrator | 22:06:10.496 STDOUT terraform:  } 2025-06-01 22:06:10.502674 | orchestrator | 22:06:10.496 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-06-01 22:06:10.502683 | orchestrator | 22:06:10.496 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-06-01 22:06:10.502691 | orchestrator | 22:06:10.496 STDOUT terraform:  + 
direction = "ingress" 2025-06-01 22:06:10.502699 | orchestrator | 22:06:10.496 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 22:06:10.502707 | orchestrator | 22:06:10.496 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.502715 | orchestrator | 22:06:10.496 STDOUT terraform:  + protocol = "icmp" 2025-06-01 22:06:10.502723 | orchestrator | 22:06:10.496 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.502731 | orchestrator | 22:06:10.496 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 22:06:10.502738 | orchestrator | 22:06:10.496 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-01 22:06:10.502746 | orchestrator | 22:06:10.496 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 22:06:10.502754 | orchestrator | 22:06:10.497 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:06:10.502761 | orchestrator | 22:06:10.497 STDOUT terraform:  } 2025-06-01 22:06:10.502769 | orchestrator | 22:06:10.497 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-06-01 22:06:10.502777 | orchestrator | 22:06:10.497 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-06-01 22:06:10.502785 | orchestrator | 22:06:10.497 STDOUT terraform:  + direction = "ingress" 2025-06-01 22:06:10.502794 | orchestrator | 22:06:10.497 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 22:06:10.502815 | orchestrator | 22:06:10.497 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.502831 | orchestrator | 22:06:10.497 STDOUT terraform:  + protocol = "tcp" 2025-06-01 22:06:10.502839 | orchestrator | 22:06:10.497 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.502847 | orchestrator | 22:06:10.497 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 22:06:10.502855 | orchestrator | 22:06:10.497 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 
2025-06-01 22:06:10.502862 | orchestrator | 22:06:10.497 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 22:06:10.502870 | orchestrator | 22:06:10.497 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:06:10.502878 | orchestrator | 22:06:10.497 STDOUT terraform:  } 2025-06-01 22:06:10.502886 | orchestrator | 22:06:10.497 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-06-01 22:06:10.502893 | orchestrator | 22:06:10.497 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-06-01 22:06:10.502901 | orchestrator | 22:06:10.497 STDOUT terraform:  + direction = "ingress" 2025-06-01 22:06:10.502909 | orchestrator | 22:06:10.497 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 22:06:10.502917 | orchestrator | 22:06:10.497 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.502924 | orchestrator | 22:06:10.497 STDOUT terraform:  + protocol = "udp" 2025-06-01 22:06:10.502932 | orchestrator | 22:06:10.497 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.502940 | orchestrator | 22:06:10.497 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 22:06:10.502948 | orchestrator | 22:06:10.497 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-01 22:06:10.502967 | orchestrator | 22:06:10.497 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 22:06:10.502976 | orchestrator | 22:06:10.497 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:06:10.502984 | orchestrator | 22:06:10.497 STDOUT terraform:  } 2025-06-01 22:06:10.502992 | orchestrator | 22:06:10.497 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-06-01 22:06:10.503000 | orchestrator | 22:06:10.497 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-06-01 22:06:10.503008 | 
orchestrator | 22:06:10.497 STDOUT terraform:  + direction = "ingress" 2025-06-01 22:06:10.503016 | orchestrator | 22:06:10.497 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 22:06:10.503023 | orchestrator | 22:06:10.497 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.503031 | orchestrator | 22:06:10.497 STDOUT terraform:  + protocol = "icmp" 2025-06-01 22:06:10.503039 | orchestrator | 22:06:10.497 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.503047 | orchestrator | 22:06:10.497 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 22:06:10.503055 | orchestrator | 22:06:10.497 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-01 22:06:10.503068 | orchestrator | 22:06:10.497 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 22:06:10.503092 | orchestrator | 22:06:10.497 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:06:10.503100 | orchestrator | 22:06:10.498 STDOUT terraform:  } 2025-06-01 22:06:10.503108 | orchestrator | 22:06:10.498 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-06-01 22:06:10.503116 | orchestrator | 22:06:10.498 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-06-01 22:06:10.503124 | orchestrator | 22:06:10.498 STDOUT terraform:  + description = "vrrp" 2025-06-01 22:06:10.503132 | orchestrator | 22:06:10.498 STDOUT terraform:  + direction = "ingress" 2025-06-01 22:06:10.503140 | orchestrator | 22:06:10.498 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 22:06:10.503148 | orchestrator | 22:06:10.498 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.503160 | orchestrator | 22:06:10.498 STDOUT terraform:  + protocol = "112" 2025-06-01 22:06:10.503168 | orchestrator | 22:06:10.498 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.503176 | orchestrator | 22:06:10.498 STDOUT terraform:  + 
remote_group_id = (known after apply) 2025-06-01 22:06:10.503184 | orchestrator | 22:06:10.498 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-01 22:06:10.503191 | orchestrator | 22:06:10.498 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 22:06:10.503199 | orchestrator | 22:06:10.498 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:06:10.503207 | orchestrator | 22:06:10.498 STDOUT terraform:  } 2025-06-01 22:06:10.503215 | orchestrator | 22:06:10.498 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-06-01 22:06:10.503223 | orchestrator | 22:06:10.498 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-06-01 22:06:10.503230 | orchestrator | 22:06:10.498 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 22:06:10.503238 | orchestrator | 22:06:10.498 STDOUT terraform:  + description = "management security group" 2025-06-01 22:06:10.503246 | orchestrator | 22:06:10.498 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.503254 | orchestrator | 22:06:10.498 STDOUT terraform:  + name = "testbed-management" 2025-06-01 22:06:10.503262 | orchestrator | 22:06:10.498 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.503270 | orchestrator | 22:06:10.498 STDOUT terraform:  + stateful = (known after apply) 2025-06-01 22:06:10.503278 | orchestrator | 22:06:10.498 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:06:10.503286 | orchestrator | 22:06:10.498 STDOUT terraform:  } 2025-06-01 22:06:10.503299 | orchestrator | 22:06:10.498 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-06-01 22:06:10.503307 | orchestrator | 22:06:10.498 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-06-01 22:06:10.503315 | orchestrator | 22:06:10.498 STDOUT terraform:  + all_tags = (known after 
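The VRRP rule in the plan above is fully spelled out by its attributes, so it can be read back as an HCL resource roughly like the following. The attribute values (`description`, `protocol = "112"`, `remote_ip_prefix`) come straight from the plan; the `security_group_id` reference is an assumption, since the plan only shows it as "(known after apply)".

```hcl
# Sketch reconstructed from the plan output; the security_group_id
# reference is assumed -- the plan only shows "(known after apply)".
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # VRRP is IP protocol number 112
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```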
apply) 2025-06-01 22:06:10.503323 | orchestrator | 22:06:10.498 STDOUT terraform:  + description = "node security group" 2025-06-01 22:06:10.503336 | orchestrator | 22:06:10.498 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.503344 | orchestrator | 22:06:10.498 STDOUT terraform:  + name = "testbed-node" 2025-06-01 22:06:10.503352 | orchestrator | 22:06:10.498 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.503360 | orchestrator | 22:06:10.498 STDOUT terraform:  + stateful = (known after apply) 2025-06-01 22:06:10.503367 | orchestrator | 22:06:10.498 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:06:10.503375 | orchestrator | 22:06:10.498 STDOUT terraform:  } 2025-06-01 22:06:10.503383 | orchestrator | 22:06:10.498 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-06-01 22:06:10.503391 | orchestrator | 22:06:10.498 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-06-01 22:06:10.503399 | orchestrator | 22:06:10.499 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 22:06:10.503407 | orchestrator | 22:06:10.499 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-06-01 22:06:10.503414 | orchestrator | 22:06:10.499 STDOUT terraform:  + dns_nameservers = [ 2025-06-01 22:06:10.503423 | orchestrator | 22:06:10.499 STDOUT terraform:  + "8.8.8.8", 2025-06-01 22:06:10.503430 | orchestrator | 22:06:10.499 STDOUT terraform:  + "9.9.9.9", 2025-06-01 22:06:10.503438 | orchestrator | 22:06:10.499 STDOUT terraform:  ] 2025-06-01 22:06:10.503446 | orchestrator | 22:06:10.499 STDOUT terraform:  + enable_dhcp = true 2025-06-01 22:06:10.503454 | orchestrator | 22:06:10.499 STDOUT terraform:  + gateway_ip = (known after apply) 2025-06-01 22:06:10.503462 | orchestrator | 22:06:10.499 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.503469 | orchestrator | 22:06:10.499 STDOUT terraform:  + ip_version = 4 2025-06-01 
22:06:10.503477 | orchestrator | 22:06:10.499 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-06-01 22:06:10.503485 | orchestrator | 22:06:10.499 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-06-01 22:06:10.503493 | orchestrator | 22:06:10.499 STDOUT terraform:  + name = "subnet-testbed-management" 2025-06-01 22:06:10.503501 | orchestrator | 22:06:10.499 STDOUT terraform:  + network_id = (known after apply) 2025-06-01 22:06:10.503509 | orchestrator | 22:06:10.499 STDOUT terraform:  + no_gateway = false 2025-06-01 22:06:10.503516 | orchestrator | 22:06:10.499 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:06:10.503524 | orchestrator | 22:06:10.499 STDOUT terraform:  + service_types = (known after apply) 2025-06-01 22:06:10.503532 | orchestrator | 22:06:10.499 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:06:10.503540 | orchestrator | 22:06:10.499 STDOUT terraform:  + allocation_pool { 2025-06-01 22:06:10.503548 | orchestrator | 22:06:10.499 STDOUT terraform:  + end = "192.168.31.250" 2025-06-01 22:06:10.503555 | orchestrator | 22:06:10.499 STDOUT terraform:  + start = "192.168.31.200" 2025-06-01 22:06:10.503563 | orchestrator | 22:06:10.499 STDOUT terraform:  } 2025-06-01 22:06:10.503571 | orchestrator | 22:06:10.499 STDOUT terraform:  } 2025-06-01 22:06:10.503582 | orchestrator | 22:06:10.499 STDOUT terraform:  # terraform_data.image will be created 2025-06-01 22:06:10.503620 | orchestrator | 22:06:10.499 STDOUT terraform:  + resource "terraform_data" "image" { 2025-06-01 22:06:10.503629 | orchestrator | 22:06:10.499 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.503637 | orchestrator | 22:06:10.499 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-06-01 22:06:10.503649 | orchestrator | 22:06:10.499 STDOUT terraform:  + output = (known after apply) 2025-06-01 22:06:10.503657 | orchestrator | 22:06:10.499 STDOUT terraform:  } 2025-06-01 22:06:10.503665 | orchestrator | 
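The management subnet plan above maps onto an HCL definition along these lines. The CIDR, DNS servers, and allocation pool are taken verbatim from the plan; the `network_id` reference is an assumption for the "(known after apply)" field.

```hcl
# Sketch of the subnet shown in the plan; network_id reference is
# assumed (the plan prints it as "known after apply").
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```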
22:06:10.499 STDOUT terraform:  # terraform_data.image_node will be created 2025-06-01 22:06:10.503673 | orchestrator | 22:06:10.499 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-06-01 22:06:10.503680 | orchestrator | 22:06:10.499 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:06:10.503688 | orchestrator | 22:06:10.499 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-06-01 22:06:10.503696 | orchestrator | 22:06:10.499 STDOUT terraform:  + output = (known after apply) 2025-06-01 22:06:10.503704 | orchestrator | 22:06:10.499 STDOUT terraform:  } 2025-06-01 22:06:10.503711 | orchestrator | 22:06:10.499 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-06-01 22:06:10.503719 | orchestrator | 22:06:10.499 STDOUT terraform: Changes to Outputs: 2025-06-01 22:06:10.503727 | orchestrator | 22:06:10.499 STDOUT terraform:  + manager_address = (sensitive value) 2025-06-01 22:06:10.503735 | orchestrator | 22:06:10.499 STDOUT terraform:  + private_key = (sensitive value) 2025-06-01 22:06:10.739943 | orchestrator | 22:06:10.739 STDOUT terraform: terraform_data.image: Creating... 2025-06-01 22:06:10.740023 | orchestrator | 22:06:10.739 STDOUT terraform: terraform_data.image_node: Creating... 2025-06-01 22:06:10.740179 | orchestrator | 22:06:10.740 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=bc13afac-bcab-20ab-2458-202944e486a3] 2025-06-01 22:06:10.740339 | orchestrator | 22:06:10.740 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=a2316674-b953-f861-93ee-1f44ae698541] 2025-06-01 22:06:10.754439 | orchestrator | 22:06:10.754 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-06-01 22:06:10.765448 | orchestrator | 22:06:10.765 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-06-01 22:06:10.765634 | orchestrator | 22:06:10.765 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 
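The `terraform_data.image` / `terraform_data.image_node` resources planned above carry the image name "Ubuntu 24.04" as their `input`. A plausible sketch of this pattern, with the data-source wiring as an assumption (the log only shows the `terraform_data` resources and a separate `data.openstack_images_image_v2` read):

```hcl
# Sketch: terraform_data holds the image name; a data source is
# assumed to resolve it to an image ID. "most_recent" is an assumption.
resource "terraform_data" "image" {
  input = "Ubuntu 24.04"
}

data "openstack_images_image_v2" "image" {
  name        = terraform_data.image.output
  most_recent = true
}
```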
2025-06-01 22:06:10.766135 | orchestrator | 22:06:10.765 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-06-01 22:06:10.766426 | orchestrator | 22:06:10.766 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-06-01 22:06:10.768332 | orchestrator | 22:06:10.768 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-06-01 22:06:10.768792 | orchestrator | 22:06:10.768 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-06-01 22:06:10.769241 | orchestrator | 22:06:10.769 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-06-01 22:06:10.769901 | orchestrator | 22:06:10.769 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-06-01 22:06:10.773223 | orchestrator | 22:06:10.773 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-06-01 22:06:11.199772 | orchestrator | 22:06:11.199 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-06-01 22:06:11.639518 | orchestrator | 22:06:11.207 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-06-01 22:06:11.639597 | orchestrator | 22:06:11.233 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-06-01 22:06:11.639612 | orchestrator | 22:06:11.241 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-06-01 22:06:11.639623 | orchestrator | 22:06:11.511 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-06-01 22:06:11.639635 | orchestrator | 22:06:11.519 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 
2025-06-01 22:06:16.739837 | orchestrator | 22:06:16.739 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=1f4394df-71c3-4405-801f-d7143bf017b6] 2025-06-01 22:06:16.751376 | orchestrator | 22:06:16.751 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-06-01 22:06:20.768795 | orchestrator | 22:06:20.768 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-06-01 22:06:20.768918 | orchestrator | 22:06:20.768 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-06-01 22:06:20.769521 | orchestrator | 22:06:20.769 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-06-01 22:06:20.770566 | orchestrator | 22:06:20.770 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-06-01 22:06:20.771759 | orchestrator | 22:06:20.771 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-06-01 22:06:20.774214 | orchestrator | 22:06:20.773 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-06-01 22:06:21.208774 | orchestrator | 22:06:21.208 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-06-01 22:06:21.242954 | orchestrator | 22:06:21.242 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... 
[10s elapsed] 2025-06-01 22:06:21.352842 | orchestrator | 22:06:21.352 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=6d04dff8-74fe-4097-ace0-4c437e5e0f9f] 2025-06-01 22:06:21.354464 | orchestrator | 22:06:21.354 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=bed9961c-b7ee-4957-bf35-2fee53571a5a] 2025-06-01 22:06:21.361282 | orchestrator | 22:06:21.361 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-06-01 22:06:21.367099 | orchestrator | 22:06:21.366 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=f76f8f5b-fbcd-4a13-87b7-7d8b29fb80c4] 2025-06-01 22:06:21.368629 | orchestrator | 22:06:21.368 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-06-01 22:06:21.374261 | orchestrator | 22:06:21.374 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=874206c32f1300be0d270b0226af2aa2ccb3efee] 2025-06-01 22:06:21.374348 | orchestrator | 22:06:21.374 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=03d5907f099a757ac275c98a7eba08b5d8394282] 2025-06-01 22:06:21.374412 | orchestrator | 22:06:21.374 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-06-01 22:06:21.379839 | orchestrator | 22:06:21.378 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=540779ba-6163-469a-a896-cda4c9a0c816] 2025-06-01 22:06:21.379910 | orchestrator | 22:06:21.378 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=e3d9d8cc-8358-4e9f-a548-9ae6b89fa066] 2025-06-01 22:06:21.382763 | orchestrator | 22:06:21.382 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-06-01 22:06:21.383016 | orchestrator | 22:06:21.382 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 
2025-06-01 22:06:21.389938 | orchestrator | 22:06:21.389 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-06-01 22:06:21.391436 | orchestrator | 22:06:21.391 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-06-01 22:06:21.395982 | orchestrator | 22:06:21.395 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=44465191-0fa1-4c22-9234-5804ca50669c] 2025-06-01 22:06:21.401936 | orchestrator | 22:06:21.401 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-06-01 22:06:21.440229 | orchestrator | 22:06:21.440 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=a15d8421-e56a-4621-aed8-2eaa8f026081] 2025-06-01 22:06:21.446944 | orchestrator | 22:06:21.446 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-06-01 22:06:21.453277 | orchestrator | 22:06:21.452 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=8eb07f49-902f-451e-9ead-836ebd4b9d37] 2025-06-01 22:06:21.520927 | orchestrator | 22:06:21.520 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-06-01 22:06:21.689395 | orchestrator | 22:06:21.689 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=a931087f-71b6-44f2-a559-c8deb4b3c146] 2025-06-01 22:06:26.754767 | orchestrator | 22:06:26.754 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... 
[10s elapsed] 2025-06-01 22:06:27.071531 | orchestrator | 22:06:27.071 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=596510d6-e7e8-4089-bbfd-3011d7f0a95f] 2025-06-01 22:06:27.322190 | orchestrator | 22:06:27.321 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=79553f25-6421-4ead-bcbe-634fa6e1de01] 2025-06-01 22:06:27.331365 | orchestrator | 22:06:27.331 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-06-01 22:06:31.376299 | orchestrator | 22:06:31.375 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-06-01 22:06:31.387237 | orchestrator | 22:06:31.386 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-06-01 22:06:31.387337 | orchestrator | 22:06:31.387 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-06-01 22:06:31.393557 | orchestrator | 22:06:31.393 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-06-01 22:06:31.393685 | orchestrator | 22:06:31.393 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-06-01 22:06:31.402805 | orchestrator | 22:06:31.402 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... 
[10s elapsed] 2025-06-01 22:06:31.724398 | orchestrator | 22:06:31.723 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=6a0f7ee6-8d36-4e2a-a158-b2d707ca7e6e] 2025-06-01 22:06:31.783591 | orchestrator | 22:06:31.783 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=790cd1d9-8172-4ab9-8ca1-308cdf1b1c1d] 2025-06-01 22:06:31.795334 | orchestrator | 22:06:31.794 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=9792ff73-3fa5-45fc-a415-ec3ce4efc22b] 2025-06-01 22:06:31.798653 | orchestrator | 22:06:31.798 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=658bfcf8-ebe2-4dc5-9176-cd4fbed88c65] 2025-06-01 22:06:31.818442 | orchestrator | 22:06:31.818 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 11s [id=3516484c-810d-4999-9d3f-5a7b207baf66] 2025-06-01 22:06:31.832170 | orchestrator | 22:06:31.831 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=c48a14af-6166-400d-9965-9cbf579c714a] 2025-06-01 22:06:34.755187 | orchestrator | 22:06:34.754 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=50c7b29b-a2fa-4c37-9340-9a387baae1ad] 2025-06-01 22:06:34.762639 | orchestrator | 22:06:34.762 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-06-01 22:06:34.762712 | orchestrator | 22:06:34.762 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-06-01 22:06:34.765032 | orchestrator | 22:06:34.764 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 
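The router and router interface created above connect the management subnet to an external network. A minimal sketch, assuming a `public_network_id` variable and a router name (neither appears in the log):

```hcl
# Sketch; router name and external network variable are assumptions.
resource "openstack_networking_router_v2" "router" {
  name                = "testbed"
  external_network_id = var.public_network_id
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}
```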
2025-06-01 22:06:34.963768 | orchestrator | 22:06:34.963 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=3ebe2081-3d0f-464e-ba46-ce0a83c4f041] 2025-06-01 22:06:34.975719 | orchestrator | 22:06:34.975 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-06-01 22:06:34.978193 | orchestrator | 22:06:34.977 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-06-01 22:06:34.981337 | orchestrator | 22:06:34.981 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-06-01 22:06:34.982866 | orchestrator | 22:06:34.982 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-06-01 22:06:34.984630 | orchestrator | 22:06:34.984 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-06-01 22:06:34.985682 | orchestrator | 22:06:34.985 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-06-01 22:06:35.178948 | orchestrator | 22:06:35.178 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=d0e45180-2999-4490-86c0-ef06869be019] 2025-06-01 22:06:35.467699 | orchestrator | 22:06:35.467 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=e392ebe5-7fd7-406d-8f84-410ca25577fe] 2025-06-01 22:06:35.489418 | orchestrator | 22:06:35.489 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=c360a7ed-ef46-46d6-a491-fac84d64af6a] 2025-06-01 22:06:35.506500 | orchestrator | 22:06:35.506 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 
2025-06-01 22:06:35.507011 | orchestrator | 22:06:35.506 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-06-01 22:06:35.515607 | orchestrator | 22:06:35.515 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-06-01 22:06:35.516296 | orchestrator | 22:06:35.516 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-06-01 22:06:35.517890 | orchestrator | 22:06:35.517 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-06-01 22:06:35.626433 | orchestrator | 22:06:35.626 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=427e14df-0b1f-40ea-9eee-d778fe77c5bb] 2025-06-01 22:06:35.641469 | orchestrator | 22:06:35.641 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-06-01 22:06:35.691542 | orchestrator | 22:06:35.691 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=31f0b55b-f13d-4a35-b460-dbf2319dc03e] 2025-06-01 22:06:35.707289 | orchestrator | 22:06:35.707 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-06-01 22:06:35.801407 | orchestrator | 22:06:35.800 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=28209a18-1362-4483-9a6c-28900e984c32] 2025-06-01 22:06:35.821119 | orchestrator | 22:06:35.820 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-06-01 22:06:35.856557 | orchestrator | 22:06:35.856 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=a8a3e676-7321-4b11-a958-77172aeaa620] 2025-06-01 22:06:35.863450 | orchestrator | 22:06:35.863 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 
2025-06-01 22:06:36.021251 | orchestrator | 22:06:36.020 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=7d7582b3-d31a-4c01-8072-55421fae75ca] 2025-06-01 22:06:36.030040 | orchestrator | 22:06:36.029 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-06-01 22:06:36.052448 | orchestrator | 22:06:36.052 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=cc3375b4-f3a4-485a-b77f-631755638708] 2025-06-01 22:06:36.222581 | orchestrator | 22:06:36.222 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=f2dad6ba-0cef-4033-a1d2-b6c2c2f4487c] 2025-06-01 22:06:40.571556 | orchestrator | 22:06:40.571 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=cd60691d-f968-4085-847c-1ec6c9adc227] 2025-06-01 22:06:41.091074 | orchestrator | 22:06:41.090 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 5s [id=d1177117-0366-4205-ade2-2abce22fb034] 2025-06-01 22:06:41.154116 | orchestrator | 22:06:41.153 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 5s [id=dd490ff8-8eb8-4b79-b69e-9aabd54c8042] 2025-06-01 22:06:41.204426 | orchestrator | 22:06:41.204 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 5s [id=7bf2f14b-4afe-40fa-b7fd-83eb895ae8ff] 2025-06-01 22:06:41.301658 | orchestrator | 22:06:41.301 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 5s [id=22496197-7f8a-45f6-96f3-fa2eb73194e6] 2025-06-01 22:06:41.404474 | orchestrator | 22:06:41.404 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 5s [id=18071a85-dea1-40aa-bf27-196bb74f3cf9] 2025-06-01 22:06:41.495684 | 
orchestrator | 22:06:41.495 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 5s [id=1c1ac932-7e49-4613-8c1a-1da9160dc074] 2025-06-01 22:06:41.693374 | orchestrator | 22:06:41.692 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=b36b91f2-bd3c-4676-b6fe-942c0bfd23bd] 2025-06-01 22:06:41.712984 | orchestrator | 22:06:41.712 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-06-01 22:06:41.726931 | orchestrator | 22:06:41.726 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-06-01 22:06:41.736656 | orchestrator | 22:06:41.736 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-06-01 22:06:41.740769 | orchestrator | 22:06:41.740 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-06-01 22:06:41.746568 | orchestrator | 22:06:41.746 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-06-01 22:06:41.746929 | orchestrator | 22:06:41.746 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-06-01 22:06:41.750568 | orchestrator | 22:06:41.750 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-06-01 22:06:48.571838 | orchestrator | 22:06:48.571 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=f1886f2c-7350-4241-a264-9db25ddc1d70] 2025-06-01 22:06:48.580362 | orchestrator | 22:06:48.579 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-06-01 22:06:48.586530 | orchestrator | 22:06:48.585 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-06-01 22:06:48.586589 | orchestrator | 22:06:48.585 STDOUT terraform: local_file.inventory: Creating... 
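The floating IP allocation and association for the manager, visible in the apply output above, could be expressed roughly like this. The pool name is an assumption; the port reference matches the `manager_port_management` resource created earlier in the log.

```hcl
# Sketch; the floating-IP pool name is an assumption.
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "external"
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}
```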
2025-06-01 22:06:48.589172 | orchestrator | 22:06:48.589 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=5bec058f56f8ab45c0677f303f1af199cae84fdb] 2025-06-01 22:06:48.589805 | orchestrator | 22:06:48.589 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=ed240747ce91cdf52f664c8cecd3e9af479dc6ae] 2025-06-01 22:06:49.315564 | orchestrator | 22:06:49.315 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=f1886f2c-7350-4241-a264-9db25ddc1d70] 2025-06-01 22:06:51.733050 | orchestrator | 22:06:51.732 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-06-01 22:06:51.737183 | orchestrator | 22:06:51.736 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-06-01 22:06:51.741357 | orchestrator | 22:06:51.741 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-06-01 22:06:51.747592 | orchestrator | 22:06:51.747 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-06-01 22:06:51.747667 | orchestrator | 22:06:51.747 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-06-01 22:06:51.753235 | orchestrator | 22:06:51.752 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-06-01 22:07:01.735655 | orchestrator | 22:07:01.735 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-06-01 22:07:01.738364 | orchestrator | 22:07:01.738 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-06-01 22:07:01.742562 | orchestrator | 22:07:01.742 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... 
[20s elapsed] 2025-06-01 22:07:01.748126 | orchestrator | 22:07:01.747 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-06-01 22:07:01.748180 | orchestrator | 22:07:01.748 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-06-01 22:07:01.753433 | orchestrator | 22:07:01.753 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-06-01 22:07:02.172118 | orchestrator | 22:07:02.171 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=ed66b759-f0ac-4825-9315-f1c243b3ea8c] 2025-06-01 22:07:02.329232 | orchestrator | 22:07:02.328 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=c70060fe-53e5-4e8e-87fd-34029f9d38fb] 2025-06-01 22:07:02.340405 | orchestrator | 22:07:02.340 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=7afae0a1-68a3-4720-8940-64fbcdb1d8b8] 2025-06-01 22:07:02.553006 | orchestrator | 22:07:02.552 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=02645717-42bd-4182-adcc-ee3594914074] 2025-06-01 22:07:11.739538 | orchestrator | 22:07:11.739 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-06-01 22:07:11.754140 | orchestrator | 22:07:11.753 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... 
[30s elapsed] 2025-06-01 22:07:12.696938 | orchestrator | 22:07:12.696 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=1a9b30cd-3610-4301-9d74-6d6e1d5884eb] 2025-06-01 22:07:12.753853 | orchestrator | 22:07:12.753 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=62bea18c-fc22-4f53-beea-a0b92c7b19f3] 2025-06-01 22:07:12.785937 | orchestrator | 22:07:12.785 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-06-01 22:07:12.788561 | orchestrator | 22:07:12.788 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-06-01 22:07:12.793318 | orchestrator | 22:07:12.793 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=4362766164350100788] 2025-06-01 22:07:12.795075 | orchestrator | 22:07:12.794 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-06-01 22:07:12.795948 | orchestrator | 22:07:12.795 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-06-01 22:07:12.796626 | orchestrator | 22:07:12.796 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-06-01 22:07:12.796846 | orchestrator | 22:07:12.796 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-06-01 22:07:12.796996 | orchestrator | 22:07:12.796 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-06-01 22:07:12.797568 | orchestrator | 22:07:12.797 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-06-01 22:07:12.801873 | orchestrator | 22:07:12.801 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 
2025-06-01 22:07:12.820607 | orchestrator | 22:07:12.820 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-06-01 22:07:12.832534 | orchestrator | 22:07:12.832 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-06-01 22:07:18.128968 | orchestrator | 22:07:18.128 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=7afae0a1-68a3-4720-8940-64fbcdb1d8b8/6d04dff8-74fe-4097-ace0-4c437e5e0f9f] 2025-06-01 22:07:18.136128 | orchestrator | 22:07:18.130 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=c70060fe-53e5-4e8e-87fd-34029f9d38fb/f76f8f5b-fbcd-4a13-87b7-7d8b29fb80c4] 2025-06-01 22:07:18.159479 | orchestrator | 22:07:18.158 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=02645717-42bd-4182-adcc-ee3594914074/a931087f-71b6-44f2-a559-c8deb4b3c146] 2025-06-01 22:07:18.165449 | orchestrator | 22:07:18.165 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=c70060fe-53e5-4e8e-87fd-34029f9d38fb/a15d8421-e56a-4621-aed8-2eaa8f026081] 2025-06-01 22:07:18.170766 | orchestrator | 22:07:18.170 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=7afae0a1-68a3-4720-8940-64fbcdb1d8b8/bed9961c-b7ee-4957-bf35-2fee53571a5a] 2025-06-01 22:07:18.186436 | orchestrator | 22:07:18.186 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=02645717-42bd-4182-adcc-ee3594914074/44465191-0fa1-4c22-9234-5804ca50669c] 2025-06-01 22:07:18.199408 | orchestrator | 22:07:18.199 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=c70060fe-53e5-4e8e-87fd-34029f9d38fb/540779ba-6163-469a-a896-cda4c9a0c816] 2025-06-01 
22:07:18.200286 | orchestrator | 22:07:18.199 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=02645717-42bd-4182-adcc-ee3594914074/8eb07f49-902f-451e-9ead-836ebd4b9d37] 2025-06-01 22:07:18.219519 | orchestrator | 22:07:18.219 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=7afae0a1-68a3-4720-8940-64fbcdb1d8b8/e3d9d8cc-8358-4e9f-a548-9ae6b89fa066] 2025-06-01 22:07:22.834395 | orchestrator | 22:07:22.834 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-06-01 22:07:32.836027 | orchestrator | 22:07:32.835 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-06-01 22:07:33.198424 | orchestrator | 22:07:33.197 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=d4037fd7-525a-4891-847c-a81d974a0c1b] 2025-06-01 22:07:33.223582 | orchestrator | 22:07:33.223 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
2025-06-01 22:07:33.223683 | orchestrator | 22:07:33.223 STDOUT terraform: Outputs: 2025-06-01 22:07:33.223713 | orchestrator | 22:07:33.223 STDOUT terraform: manager_address = 2025-06-01 22:07:33.223744 | orchestrator | 22:07:33.223 STDOUT terraform: private_key = 2025-06-01 22:07:33.699814 | orchestrator | ok: Runtime: 0:01:33.544781 2025-06-01 22:07:33.738245 | 2025-06-01 22:07:33.738386 | TASK [Fetch manager address] 2025-06-01 22:07:34.183839 | orchestrator | ok 2025-06-01 22:07:34.195767 | 2025-06-01 22:07:34.195931 | TASK [Set manager_host address] 2025-06-01 22:07:34.270949 | orchestrator | ok 2025-06-01 22:07:34.280152 | 2025-06-01 22:07:34.280280 | LOOP [Update ansible collections] 2025-06-01 22:07:36.872879 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-01 22:07:36.873175 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-01 22:07:36.873216 | orchestrator | Starting galaxy collection install process 2025-06-01 22:07:36.873244 | orchestrator | Process install dependency map 2025-06-01 22:07:36.873269 | orchestrator | Starting collection install process 2025-06-01 22:07:36.873292 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons' 2025-06-01 22:07:36.873319 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons 2025-06-01 22:07:36.873347 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-06-01 22:07:36.873396 | orchestrator | ok: Item: commons Runtime: 0:00:02.249154 2025-06-01 22:07:37.713082 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-01 22:07:37.713248 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-01 22:07:37.713299 | orchestrator | Starting galaxy 
collection install process 2025-06-01 22:07:37.713339 | orchestrator | Process install dependency map 2025-06-01 22:07:37.713373 | orchestrator | Starting collection install process 2025-06-01 22:07:37.713406 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services' 2025-06-01 22:07:37.713439 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services 2025-06-01 22:07:37.713469 | orchestrator | osism.services:999.0.0 was installed successfully 2025-06-01 22:07:37.713518 | orchestrator | ok: Item: services Runtime: 0:00:00.580941 2025-06-01 22:07:37.738989 | 2025-06-01 22:07:37.739157 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-01 22:07:48.315100 | orchestrator | ok 2025-06-01 22:07:48.325983 | 2025-06-01 22:07:48.326112 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-01 22:08:48.372431 | orchestrator | ok 2025-06-01 22:08:48.384767 | 2025-06-01 22:08:48.384912 | TASK [Fetch manager ssh hostkey] 2025-06-01 22:08:49.964878 | orchestrator | Output suppressed because no_log was given 2025-06-01 22:08:49.972902 | 2025-06-01 22:08:49.973053 | TASK [Get ssh keypair from terraform environment] 2025-06-01 22:08:50.507319 | orchestrator | ok: Runtime: 0:00:00.010540 2025-06-01 22:08:50.524544 | 2025-06-01 22:08:50.524726 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-01 22:08:50.560900 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-06-01 22:08:50.570375 | 2025-06-01 22:08:50.570542 | TASK [Run manager part 0] 2025-06-01 22:08:51.583130 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-01 22:08:51.626433 | orchestrator | 2025-06-01 22:08:51.626508 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-06-01 22:08:51.626527 | orchestrator | 2025-06-01 22:08:51.626560 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-06-01 22:08:53.599734 | orchestrator | ok: [testbed-manager] 2025-06-01 22:08:53.599815 | orchestrator | 2025-06-01 22:08:53.599877 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-01 22:08:53.599908 | orchestrator | 2025-06-01 22:08:53.599937 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 22:08:55.653545 | orchestrator | ok: [testbed-manager] 2025-06-01 22:08:55.653585 | orchestrator | 2025-06-01 22:08:55.653592 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-01 22:08:56.382392 | orchestrator | ok: [testbed-manager] 2025-06-01 22:08:56.382436 | orchestrator | 2025-06-01 22:08:56.382444 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-01 22:08:56.434911 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:08:56.434972 | orchestrator | 2025-06-01 22:08:56.435008 | orchestrator | TASK [Update package cache] **************************************************** 2025-06-01 22:08:56.473377 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:08:56.473424 | orchestrator | 2025-06-01 22:08:56.473435 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-01 22:08:56.503462 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:08:56.503503 | 
orchestrator | 2025-06-01 22:08:56.503510 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-01 22:08:56.534908 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:08:56.534953 | orchestrator | 2025-06-01 22:08:56.534960 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-01 22:08:56.564051 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:08:56.564092 | orchestrator | 2025-06-01 22:08:56.564098 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-06-01 22:08:56.596866 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:08:56.596905 | orchestrator | 2025-06-01 22:08:56.596913 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-06-01 22:08:56.626149 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:08:56.626188 | orchestrator | 2025-06-01 22:08:56.626195 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-06-01 22:08:57.474560 | orchestrator | changed: [testbed-manager] 2025-06-01 22:08:57.474613 | orchestrator | 2025-06-01 22:08:57.474624 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-06-01 22:12:14.673868 | orchestrator | changed: [testbed-manager] 2025-06-01 22:12:14.673957 | orchestrator | 2025-06-01 22:12:14.673975 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-01 22:13:38.802834 | orchestrator | changed: [testbed-manager] 2025-06-01 22:13:38.802897 | orchestrator | 2025-06-01 22:13:38.802905 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-01 22:14:03.633994 | orchestrator | changed: [testbed-manager] 2025-06-01 22:14:03.634083 | orchestrator | 2025-06-01 22:14:03.634094 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-06-01 22:14:13.013275 | orchestrator | changed: [testbed-manager] 2025-06-01 22:14:13.013318 | orchestrator | 2025-06-01 22:14:13.013326 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-01 22:14:13.061633 | orchestrator | ok: [testbed-manager] 2025-06-01 22:14:13.061680 | orchestrator | 2025-06-01 22:14:13.061693 | orchestrator | TASK [Get current user] ******************************************************** 2025-06-01 22:14:13.837422 | orchestrator | ok: [testbed-manager] 2025-06-01 22:14:13.837495 | orchestrator | 2025-06-01 22:14:13.837510 | orchestrator | TASK [Create venv directory] *************************************************** 2025-06-01 22:14:14.616640 | orchestrator | changed: [testbed-manager] 2025-06-01 22:14:14.616741 | orchestrator | 2025-06-01 22:14:14.616759 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-06-01 22:14:21.268667 | orchestrator | changed: [testbed-manager] 2025-06-01 22:14:21.268718 | orchestrator | 2025-06-01 22:14:21.268753 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-06-01 22:14:27.535632 | orchestrator | changed: [testbed-manager] 2025-06-01 22:14:27.535738 | orchestrator | 2025-06-01 22:14:27.535757 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-06-01 22:14:30.306117 | orchestrator | changed: [testbed-manager] 2025-06-01 22:14:30.306206 | orchestrator | 2025-06-01 22:14:30.306222 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-06-01 22:14:32.229620 | orchestrator | changed: [testbed-manager] 2025-06-01 22:14:32.229701 | orchestrator | 2025-06-01 22:14:32.229716 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-06-01 
22:14:33.404371 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-01 22:14:33.404418 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-01 22:14:33.404426 | orchestrator | 2025-06-01 22:14:33.404434 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-06-01 22:14:33.444948 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-01 22:14:33.445026 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-01 22:14:33.445062 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-01 22:14:33.445077 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-06-01 22:14:38.247631 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-01 22:14:38.247725 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-01 22:14:38.247740 | orchestrator | 2025-06-01 22:14:38.247752 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-06-01 22:14:38.842459 | orchestrator | changed: [testbed-manager] 2025-06-01 22:14:38.842540 | orchestrator | 2025-06-01 22:14:38.842556 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-06-01 22:17:08.946542 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-06-01 22:17:08.946641 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-06-01 22:17:08.946661 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-06-01 22:17:08.946674 | orchestrator | 2025-06-01 22:17:08.946687 | orchestrator | TASK [Install local collections] *********************************************** 2025-06-01 22:17:11.333419 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-06-01 22:17:11.333501 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-06-01 22:17:11.333517 | orchestrator | 2025-06-01 22:17:11.333530 | orchestrator | PLAY [Create operator user] **************************************************** 2025-06-01 22:17:11.333542 | orchestrator | 2025-06-01 22:17:11.333554 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 22:17:12.765595 | orchestrator | ok: [testbed-manager] 2025-06-01 22:17:12.765686 | orchestrator | 2025-06-01 22:17:12.765703 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-01 22:17:12.812830 | orchestrator | ok: [testbed-manager] 2025-06-01 22:17:12.812899 | orchestrator | 2025-06-01 22:17:12.812910 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-01 22:17:12.885667 | orchestrator | ok: [testbed-manager] 2025-06-01 22:17:12.885739 | orchestrator | 2025-06-01 22:17:12.885755 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-01 22:17:13.639809 | orchestrator | changed: [testbed-manager] 2025-06-01 22:17:13.639848 | orchestrator | 2025-06-01 22:17:13.639856 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-01 22:17:14.393876 | orchestrator | changed: [testbed-manager] 2025-06-01 22:17:14.394736 | orchestrator | 2025-06-01 22:17:14.394761 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-01 22:17:15.812594 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-06-01 22:17:15.812677 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-06-01 22:17:15.812691 | orchestrator | 2025-06-01 22:17:15.812717 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-06-01 22:17:17.259906 | orchestrator | changed: [testbed-manager] 2025-06-01 22:17:17.259991 | orchestrator | 2025-06-01 22:17:17.260007 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-06-01 22:17:19.058661 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-06-01 22:17:19.058706 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-06-01 22:17:19.058715 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-06-01 22:17:19.058722 | orchestrator | 2025-06-01 22:17:19.058730 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-01 22:17:19.651374 | orchestrator | changed: [testbed-manager] 2025-06-01 22:17:19.651466 | orchestrator | 2025-06-01 22:17:19.651484 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-01 22:17:19.717453 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:17:19.717518 | orchestrator | 2025-06-01 22:17:19.717532 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-06-01 22:17:20.589886 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-01 22:17:20.590117 | orchestrator | changed: [testbed-manager] 2025-06-01 22:17:20.590181 | orchestrator | 2025-06-01 22:17:20.590196 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-01 22:17:20.633596 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:17:20.633664 | orchestrator | 2025-06-01 22:17:20.633679 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-01 22:17:20.669776 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:17:20.669826 | orchestrator | 2025-06-01 22:17:20.669839 | orchestrator | TASK [osism.commons.operator : Delete 
authorized GitHub accounts] ************** 2025-06-01 22:17:20.701826 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:17:20.701860 | orchestrator | 2025-06-01 22:17:20.701872 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-01 22:17:20.751259 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:17:20.751317 | orchestrator | 2025-06-01 22:17:20.751332 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-01 22:17:21.475520 | orchestrator | ok: [testbed-manager] 2025-06-01 22:17:21.475598 | orchestrator | 2025-06-01 22:17:21.475613 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-01 22:17:21.475626 | orchestrator | 2025-06-01 22:17:21.475639 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 22:17:22.925785 | orchestrator | ok: [testbed-manager] 2025-06-01 22:17:22.925875 | orchestrator | 2025-06-01 22:17:22.925892 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-06-01 22:17:23.932222 | orchestrator | changed: [testbed-manager] 2025-06-01 22:17:23.932309 | orchestrator | 2025-06-01 22:17:23.932327 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:17:23.932341 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-01 22:17:23.932357 | orchestrator | 2025-06-01 22:17:24.416728 | orchestrator | ok: Runtime: 0:08:33.153042 2025-06-01 22:17:24.433724 | 2025-06-01 22:17:24.433865 | TASK [Point out that the log in on the manager is now possible] 2025-06-01 22:17:24.481824 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
2025-06-01 22:17:24.491279 | 2025-06-01 22:17:24.491457 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-01 22:17:24.530061 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-06-01 22:17:24.540286 | 2025-06-01 22:17:24.540445 | TASK [Run manager part 1 + 2] 2025-06-01 22:17:25.455975 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-01 22:17:25.513697 | orchestrator | 2025-06-01 22:17:25.513748 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-06-01 22:17:25.513755 | orchestrator | 2025-06-01 22:17:25.513768 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 22:17:28.598182 | orchestrator | ok: [testbed-manager] 2025-06-01 22:17:28.598246 | orchestrator | 2025-06-01 22:17:28.598276 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-01 22:17:28.638525 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:17:28.638574 | orchestrator | 2025-06-01 22:17:28.638584 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-01 22:17:28.680812 | orchestrator | ok: [testbed-manager] 2025-06-01 22:17:28.680865 | orchestrator | 2025-06-01 22:17:28.680876 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-01 22:17:28.717793 | orchestrator | ok: [testbed-manager] 2025-06-01 22:17:28.717838 | orchestrator | 2025-06-01 22:17:28.717847 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-01 22:17:28.782282 | orchestrator | ok: [testbed-manager] 2025-06-01 22:17:28.782334 | orchestrator | 2025-06-01 22:17:28.782344 | 
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-01 22:17:28.842086 | orchestrator | ok: [testbed-manager] 2025-06-01 22:17:28.842160 | orchestrator | 2025-06-01 22:17:28.842171 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-01 22:17:28.889575 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-06-01 22:17:28.889614 | orchestrator | 2025-06-01 22:17:28.889619 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-01 22:17:29.647848 | orchestrator | ok: [testbed-manager] 2025-06-01 22:17:29.647907 | orchestrator | 2025-06-01 22:17:29.647918 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-01 22:17:29.702912 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:17:29.702966 | orchestrator | 2025-06-01 22:17:29.702973 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-01 22:17:31.102102 | orchestrator | changed: [testbed-manager] 2025-06-01 22:17:31.102177 | orchestrator | 2025-06-01 22:17:31.102187 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-01 22:17:31.674944 | orchestrator | ok: [testbed-manager] 2025-06-01 22:17:31.674993 | orchestrator | 2025-06-01 22:17:31.675003 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-01 22:17:32.853010 | orchestrator | changed: [testbed-manager] 2025-06-01 22:17:32.853061 | orchestrator | 2025-06-01 22:17:32.853069 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-01 22:17:46.253540 | orchestrator | changed: [testbed-manager] 2025-06-01 22:17:46.253587 | orchestrator | 
2025-06-01 22:17:46.253593 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-01 22:17:46.902165 | orchestrator | ok: [testbed-manager] 2025-06-01 22:17:46.902219 | orchestrator | 2025-06-01 22:17:46.902226 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-01 22:17:46.957305 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:17:46.957338 | orchestrator | 2025-06-01 22:17:46.957343 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-06-01 22:17:47.991727 | orchestrator | changed: [testbed-manager] 2025-06-01 22:17:47.991795 | orchestrator | 2025-06-01 22:17:47.991810 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-06-01 22:17:49.016686 | orchestrator | changed: [testbed-manager] 2025-06-01 22:17:49.016867 | orchestrator | 2025-06-01 22:17:49.016886 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-06-01 22:17:49.591096 | orchestrator | changed: [testbed-manager] 2025-06-01 22:17:49.591196 | orchestrator | 2025-06-01 22:17:49.591213 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-06-01 22:17:49.630939 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-01 22:17:49.631058 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-01 22:17:49.631075 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-01 22:17:49.631087 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-06-01 22:17:52.099095 | orchestrator | changed: [testbed-manager] 2025-06-01 22:17:52.099216 | orchestrator | 2025-06-01 22:17:52.099234 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-06-01 22:18:01.152062 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-06-01 22:18:01.152108 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-06-01 22:18:01.152119 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-06-01 22:18:01.152126 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-06-01 22:18:01.152137 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-06-01 22:18:01.152200 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-06-01 22:18:01.152209 | orchestrator | 2025-06-01 22:18:01.152217 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-06-01 22:18:02.222741 | orchestrator | changed: [testbed-manager] 2025-06-01 22:18:02.222826 | orchestrator | 2025-06-01 22:18:02.222842 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-06-01 22:18:02.262379 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:18:02.262472 | orchestrator | 2025-06-01 22:18:02.262489 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-06-01 22:18:05.404678 | orchestrator | changed: [testbed-manager] 2025-06-01 22:18:05.404792 | orchestrator | 2025-06-01 22:18:05.404817 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-06-01 22:18:05.451044 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:18:05.451123 | orchestrator | 2025-06-01 22:18:05.451138 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-06-01 22:19:47.768167 | orchestrator | changed: [testbed-manager] 2025-06-01 
22:19:47.768247 | orchestrator | 2025-06-01 22:19:47.768256 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-01 22:19:48.941841 | orchestrator | ok: [testbed-manager] 2025-06-01 22:19:48.941948 | orchestrator | 2025-06-01 22:19:48.941976 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:19:48.942000 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-06-01 22:19:48.942048 | orchestrator | 2025-06-01 22:19:49.179923 | orchestrator | ok: Runtime: 0:02:24.171678 2025-06-01 22:19:49.195884 | 2025-06-01 22:19:49.196042 | TASK [Reboot manager] 2025-06-01 22:19:50.731486 | orchestrator | ok: Runtime: 0:00:00.975021 2025-06-01 22:19:50.746042 | 2025-06-01 22:19:50.746217 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-01 22:20:07.156026 | orchestrator | ok 2025-06-01 22:20:07.168581 | 2025-06-01 22:20:07.168833 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-01 22:21:07.219751 | orchestrator | ok 2025-06-01 22:21:07.229888 | 2025-06-01 22:21:07.230037 | TASK [Deploy manager + bootstrap nodes] 2025-06-01 22:21:09.892781 | orchestrator | 2025-06-01 22:21:09.892980 | orchestrator | # DEPLOY MANAGER 2025-06-01 22:21:09.893004 | orchestrator | 2025-06-01 22:21:09.893019 | orchestrator | + set -e 2025-06-01 22:21:09.893032 | orchestrator | + echo 2025-06-01 22:21:09.893046 | orchestrator | + echo '# DEPLOY MANAGER' 2025-06-01 22:21:09.893063 | orchestrator | + echo 2025-06-01 22:21:09.893114 | orchestrator | + cat /opt/manager-vars.sh 2025-06-01 22:21:09.897013 | orchestrator | export NUMBER_OF_NODES=6 2025-06-01 22:21:09.897126 | orchestrator | 2025-06-01 22:21:09.897145 | orchestrator | export CEPH_VERSION=reef 2025-06-01 22:21:09.897160 | orchestrator | export CONFIGURATION_VERSION=main 2025-06-01 22:21:09.897173 | orchestrator 
| export MANAGER_VERSION=9.1.0 2025-06-01 22:21:09.897206 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-06-01 22:21:09.897297 | orchestrator | 2025-06-01 22:21:09.897337 | orchestrator | export ARA=false 2025-06-01 22:21:09.897356 | orchestrator | export DEPLOY_MODE=manager 2025-06-01 22:21:09.897383 | orchestrator | export TEMPEST=false 2025-06-01 22:21:09.897404 | orchestrator | export IS_ZUUL=true 2025-06-01 22:21:09.897415 | orchestrator | 2025-06-01 22:21:09.897433 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.90 2025-06-01 22:21:09.897445 | orchestrator | export EXTERNAL_API=false 2025-06-01 22:21:09.897456 | orchestrator | 2025-06-01 22:21:09.897467 | orchestrator | export IMAGE_USER=ubuntu 2025-06-01 22:21:09.897481 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-06-01 22:21:09.897492 | orchestrator | 2025-06-01 22:21:09.897503 | orchestrator | export CEPH_STACK=ceph-ansible 2025-06-01 22:21:09.897527 | orchestrator | 2025-06-01 22:21:09.897540 | orchestrator | + echo 2025-06-01 22:21:09.897566 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-01 22:21:09.898156 | orchestrator | ++ export INTERACTIVE=false 2025-06-01 22:21:09.898193 | orchestrator | ++ INTERACTIVE=false 2025-06-01 22:21:09.898213 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-01 22:21:09.898226 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-01 22:21:09.898491 | orchestrator | + source /opt/manager-vars.sh 2025-06-01 22:21:09.898508 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-01 22:21:09.898525 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-01 22:21:09.898536 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-01 22:21:09.898547 | orchestrator | ++ CEPH_VERSION=reef 2025-06-01 22:21:09.898558 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-01 22:21:09.898569 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-01 22:21:09.898580 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-01 22:21:09.898591 | 
orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-01 22:21:09.898602 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-01 22:21:09.898625 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-01 22:21:09.898636 | orchestrator | ++ export ARA=false
2025-06-01 22:21:09.898647 | orchestrator | ++ ARA=false
2025-06-01 22:21:09.898658 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-01 22:21:09.898668 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-01 22:21:09.898679 | orchestrator | ++ export TEMPEST=false
2025-06-01 22:21:09.898690 | orchestrator | ++ TEMPEST=false
2025-06-01 22:21:09.898700 | orchestrator | ++ export IS_ZUUL=true
2025-06-01 22:21:09.898711 | orchestrator | ++ IS_ZUUL=true
2025-06-01 22:21:09.898722 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.90
2025-06-01 22:21:09.898732 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.90
2025-06-01 22:21:09.898743 | orchestrator | ++ export EXTERNAL_API=false
2025-06-01 22:21:09.898754 | orchestrator | ++ EXTERNAL_API=false
2025-06-01 22:21:09.898765 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-01 22:21:09.898780 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-01 22:21:09.898791 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-01 22:21:09.898802 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-01 22:21:09.898813 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-01 22:21:09.898823 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-01 22:21:09.898834 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-06-01 22:21:09.954179 | orchestrator | + docker version
2025-06-01 22:21:10.232098 | orchestrator | Client: Docker Engine - Community
2025-06-01 22:21:10.232196 | orchestrator | Version: 27.5.1
2025-06-01 22:21:10.232211 | orchestrator | API version: 1.47
2025-06-01 22:21:10.232223 | orchestrator | Go version: go1.22.11
2025-06-01 22:21:10.232281 | orchestrator | Git commit: 9f9e405
2025-06-01 22:21:10.232295 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-06-01 22:21:10.232308 | orchestrator | OS/Arch: linux/amd64
2025-06-01 22:21:10.232318 | orchestrator | Context: default
2025-06-01 22:21:10.232329 | orchestrator |
2025-06-01 22:21:10.232341 | orchestrator | Server: Docker Engine - Community
2025-06-01 22:21:10.232352 | orchestrator | Engine:
2025-06-01 22:21:10.232364 | orchestrator | Version: 27.5.1
2025-06-01 22:21:10.232375 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-06-01 22:21:10.232415 | orchestrator | Go version: go1.22.11
2025-06-01 22:21:10.232426 | orchestrator | Git commit: 4c9b3b0
2025-06-01 22:21:10.232437 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-06-01 22:21:10.232448 | orchestrator | OS/Arch: linux/amd64
2025-06-01 22:21:10.232459 | orchestrator | Experimental: false
2025-06-01 22:21:10.232470 | orchestrator | containerd:
2025-06-01 22:21:10.232481 | orchestrator | Version: 1.7.27
2025-06-01 22:21:10.232492 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-06-01 22:21:10.232504 | orchestrator | runc:
2025-06-01 22:21:10.232514 | orchestrator | Version: 1.2.5
2025-06-01 22:21:10.232525 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-06-01 22:21:10.232536 | orchestrator | docker-init:
2025-06-01 22:21:10.232547 | orchestrator | Version: 0.19.0
2025-06-01 22:21:10.232559 | orchestrator | GitCommit: de40ad0
2025-06-01 22:21:10.235456 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-06-01 22:21:10.244151 | orchestrator | + set -e
2025-06-01 22:21:10.244176 | orchestrator | + source /opt/manager-vars.sh
2025-06-01 22:21:10.244187 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-01 22:21:10.244198 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-01 22:21:10.244209 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-01 22:21:10.244220 | orchestrator | ++ CEPH_VERSION=reef
2025-06-01 22:21:10.244231 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-01 22:21:10.244266 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-01 22:21:10.244278 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-01 22:21:10.244289 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-01 22:21:10.244300 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-01 22:21:10.244311 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-01 22:21:10.244322 | orchestrator | ++ export ARA=false
2025-06-01 22:21:10.244333 | orchestrator | ++ ARA=false
2025-06-01 22:21:10.244353 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-01 22:21:10.244363 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-01 22:21:10.244374 | orchestrator | ++ export TEMPEST=false
2025-06-01 22:21:10.244385 | orchestrator | ++ TEMPEST=false
2025-06-01 22:21:10.244396 | orchestrator | ++ export IS_ZUUL=true
2025-06-01 22:21:10.244406 | orchestrator | ++ IS_ZUUL=true
2025-06-01 22:21:10.244418 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.90
2025-06-01 22:21:10.244429 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.90
2025-06-01 22:21:10.244440 | orchestrator | ++ export EXTERNAL_API=false
2025-06-01 22:21:10.244451 | orchestrator | ++ EXTERNAL_API=false
2025-06-01 22:21:10.244468 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-01 22:21:10.244479 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-01 22:21:10.244496 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-01 22:21:10.244508 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-01 22:21:10.244519 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-01 22:21:10.244529 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-01 22:21:10.244540 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-01 22:21:10.244551 | orchestrator | ++ export INTERACTIVE=false
2025-06-01 22:21:10.244562 | orchestrator | ++ INTERACTIVE=false
2025-06-01 22:21:10.244572 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-01 22:21:10.244588 | orchestrator | ++ OSISM_APPLY_RETRY=1
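Editor's note: the trace above shows every deploy script beginning with `source /opt/manager-vars.sh` and `source /opt/configuration/scripts/include.sh`, which re-export the same settings. A minimal sketch of that pattern, using values taken from the log; the temp file stands in for the real `/opt/manager-vars.sh`:

```shell
# Sketch of the manager-vars.sh pattern: a plain shell file of export
# assignments that each deploy script sources. Values come from the log;
# the mktemp path is an assumption standing in for /opt/manager-vars.sh.
VARS_FILE="$(mktemp)"
cat > "$VARS_FILE" <<'EOF'
export NUMBER_OF_NODES=6
export CEPH_VERSION=reef
export MANAGER_VERSION=9.1.0
export OPENSTACK_VERSION=2024.2
export DEPLOY_MODE=manager
export CEPH_STACK=ceph-ansible
EOF
. "$VARS_FILE"   # same effect as `source /opt/manager-vars.sh` in the trace
echo "deploying manager $MANAGER_VERSION with $CEPH_STACK"
# prints: deploying manager 9.1.0 with ceph-ansible
```

Because the file only contains `export` assignments, sourcing it is idempotent, which is why the trace shows the same block re-exported by each nested script without harm.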
2025-06-01 22:21:10.244603 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]]
2025-06-01 22:21:10.244614 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.1.0
2025-06-01 22:21:10.252650 | orchestrator | + set -e
2025-06-01 22:21:10.252678 | orchestrator | + VERSION=9.1.0
2025-06-01 22:21:10.252691 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.1.0/g' /opt/configuration/environments/manager/configuration.yml
2025-06-01 22:21:10.261392 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]]
2025-06-01 22:21:10.261419 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2025-06-01 22:21:10.265534 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2025-06-01 22:21:10.268856 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2025-06-01 22:21:10.277492 | orchestrator | /opt/configuration ~
2025-06-01 22:21:10.277518 | orchestrator | + set -e
2025-06-01 22:21:10.277530 | orchestrator | + pushd /opt/configuration
2025-06-01 22:21:10.277541 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-01 22:21:10.279675 | orchestrator | + source /opt/venv/bin/activate
2025-06-01 22:21:10.280964 | orchestrator | ++ deactivate nondestructive
2025-06-01 22:21:10.280982 | orchestrator | ++ '[' -n '' ']'
2025-06-01 22:21:10.280995 | orchestrator | ++ '[' -n '' ']'
2025-06-01 22:21:10.281024 | orchestrator | ++ hash -r
2025-06-01 22:21:10.281035 | orchestrator | ++ '[' -n '' ']'
2025-06-01 22:21:10.281046 | orchestrator | ++ unset VIRTUAL_ENV
2025-06-01 22:21:10.281057 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-06-01 22:21:10.281068 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-06-01 22:21:10.281091 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-06-01 22:21:10.281102 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-06-01 22:21:10.281118 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-06-01 22:21:10.281130 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-06-01 22:21:10.281142 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-01 22:21:10.281163 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-01 22:21:10.281175 | orchestrator | ++ export PATH
2025-06-01 22:21:10.281186 | orchestrator | ++ '[' -n '' ']'
2025-06-01 22:21:10.281216 | orchestrator | ++ '[' -z '' ']'
2025-06-01 22:21:10.281228 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-06-01 22:21:10.281257 | orchestrator | ++ PS1='(venv) '
2025-06-01 22:21:10.281268 | orchestrator | ++ export PS1
2025-06-01 22:21:10.281279 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-06-01 22:21:10.281290 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-06-01 22:21:10.281361 | orchestrator | ++ hash -r
2025-06-01 22:21:10.281375 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2025-06-01 22:21:11.704014 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2025-06-01 22:21:11.827444 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3)
2025-06-01 22:21:11.827516 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2025-06-01 22:21:11.827550 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2)
2025-06-01 22:21:11.827563 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0)
2025-06-01 22:21:11.827575 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1)
2025-06-01 22:21:11.827586 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-06-01 22:21:11.827598 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19)
2025-06-01 22:21:11.827609 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-06-01 22:21:11.827621 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2)
2025-06-01 22:21:11.827633 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-06-01 22:21:11.827644 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0)
2025-06-01 22:21:11.827655 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26)
2025-06-01 22:21:11.827666 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-06-01 22:21:11.989518 | orchestrator | ++ which gilt
2025-06-01 22:21:11.993985 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-06-01 22:21:11.994087 | orchestrator | + /opt/venv/bin/gilt overlay
2025-06-01 22:21:12.245913 | orchestrator | osism.cfg-generics:
2025-06-01 22:21:12.426771 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-06-01 22:21:12.426866 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-06-01 22:21:12.426943 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-06-01 22:21:12.426961 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-06-01 22:21:13.332044 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-06-01 22:21:13.339518 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-06-01 22:21:13.676308 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-06-01 22:21:13.736710 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-01 22:21:13.736788 | orchestrator | + deactivate
2025-06-01 22:21:13.736804 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-06-01 22:21:13.736817 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-01 22:21:13.736828 | orchestrator | + export PATH
2025-06-01 22:21:13.736839 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-06-01 22:21:13.736851 | orchestrator | + '[' -n '' ']'
2025-06-01 22:21:13.736864 | orchestrator | + hash -r
2025-06-01 22:21:13.736875 | orchestrator | + '[' -n '' ']'
2025-06-01 22:21:13.736886 | orchestrator | + unset VIRTUAL_ENV
2025-06-01 22:21:13.736897 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-06-01 22:21:13.736908 | orchestrator | ~
2025-06-01 22:21:13.736919 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-06-01 22:21:13.736930 | orchestrator | + unset -f deactivate
2025-06-01 22:21:13.736941 | orchestrator | + popd
2025-06-01 22:21:13.738736 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]]
2025-06-01 22:21:13.738772 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-06-01 22:21:13.739385 | orchestrator | ++ semver 9.1.0 7.0.0
2025-06-01 22:21:13.805721 | orchestrator | + [[ 1 -ge 0 ]]
2025-06-01 22:21:13.805813 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-06-01 22:21:13.805828 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-06-01 22:21:13.854790 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-01 22:21:13.854877 | orchestrator | + source /opt/venv/bin/activate
2025-06-01 22:21:13.854890 | orchestrator | ++ deactivate nondestructive
2025-06-01 22:21:13.854901 | orchestrator | ++ '[' -n '' ']'
2025-06-01 22:21:13.854912 | orchestrator | ++ '[' -n '' ']'
2025-06-01 22:21:13.854923 | orchestrator | ++ hash -r
2025-06-01 22:21:13.854934 | orchestrator | ++ '[' -n '' ']'
2025-06-01 22:21:13.854945 | orchestrator | ++ unset VIRTUAL_ENV
2025-06-01 22:21:13.854955 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-06-01 22:21:13.854966 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-06-01 22:21:13.855326 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-06-01 22:21:13.855348 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-06-01 22:21:13.855359 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-06-01 22:21:13.855370 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-06-01 22:21:13.855724 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-01 22:21:13.855747 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-01 22:21:13.855780 | orchestrator | ++ export PATH
2025-06-01 22:21:13.855791 | orchestrator | ++ '[' -n '' ']'
2025-06-01 22:21:13.855802 | orchestrator | ++ '[' -z '' ']'
2025-06-01 22:21:13.855812 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-06-01 22:21:13.855823 | orchestrator | ++ PS1='(venv) '
2025-06-01 22:21:13.855834 | orchestrator | ++ export PS1
2025-06-01 22:21:13.855845 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-06-01 22:21:13.855855 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-06-01 22:21:13.855874 | orchestrator | ++ hash -r
2025-06-01 22:21:13.855952 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-06-01 22:21:15.270290 | orchestrator |
2025-06-01 22:21:15.270398 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-06-01 22:21:15.270420 | orchestrator |
2025-06-01 22:21:15.270432 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-01 22:21:15.901382 | orchestrator | ok: [testbed-manager]
2025-06-01 22:21:15.901490 | orchestrator |
2025-06-01 22:21:15.901508 | orchestrator | TASK [Copy fact files] *********************************************************
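Editor's note: earlier in this trace the deploy script runs `semver 9.1.0 7.0.0`, captures `1`, and gates `enable_osism_kubernetes: true` on `[[ 1 -ge 0 ]]`. The `semver` command is a symlink to `contrib/semver2.sh`; its full output convention is not visible in the log, so the comparator below is a hedged stand-in that prints -1, 0, or 1 the way this one call suggests:

```shell
# Minimal major.minor.patch comparator, standing in for contrib/semver2.sh
# (an assumption: only the -1/0/1 behaviour seen in the trace is modelled,
# no pre-release or build-metadata handling).
semver_cmp() {
    IFS=. read -r a1 a2 a3 <<EOF
$1
EOF
    IFS=. read -r b1 b2 b3 <<EOF
$2
EOF
    for pair in "$a1 $b1" "$a2 $b2" "$a3 $b3"; do
        set -- $pair    # word-split the pair into $1 and $2
        if [ "$1" -gt "$2" ]; then echo 1; return; fi
        if [ "$1" -lt "$2" ]; then echo -1; return; fi
    done
    echo 0
}

# The gate from the log: enable the feature when MANAGER_VERSION >= 7.0.0
if [ "$(semver_cmp 9.1.0 7.0.0)" -ge 0 ]; then
    echo 'enable_osism_kubernetes: true'
fi
# prints: enable_osism_kubernetes: true
```

This explains why the trace appends `enable_osism_kubernetes: true` for manager version 9.1.0: the comparison returns 1, which satisfies the `-ge 0` gate.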
2025-06-01 22:21:16.928469 | orchestrator | changed: [testbed-manager]
2025-06-01 22:21:16.928584 | orchestrator |
2025-06-01 22:21:16.928600 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-06-01 22:21:16.928613 | orchestrator |
2025-06-01 22:21:16.975323 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-01 22:21:19.368614 | orchestrator | ok: [testbed-manager]
2025-06-01 22:21:19.368736 | orchestrator |
2025-06-01 22:21:19.368754 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-06-01 22:21:19.424353 | orchestrator | ok: [testbed-manager]
2025-06-01 22:21:19.424444 | orchestrator |
2025-06-01 22:21:19.424459 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-06-01 22:21:19.975337 | orchestrator | changed: [testbed-manager]
2025-06-01 22:21:19.975435 | orchestrator |
2025-06-01 22:21:19.975454 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-06-01 22:21:20.013450 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:21:20.013514 | orchestrator |
2025-06-01 22:21:20.013528 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-06-01 22:21:20.361582 | orchestrator | changed: [testbed-manager]
2025-06-01 22:21:20.361678 | orchestrator |
2025-06-01 22:21:20.361693 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-06-01 22:21:20.421731 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:21:20.421808 | orchestrator |
2025-06-01 22:21:20.421821 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-06-01 22:21:20.765026 | orchestrator | ok: [testbed-manager]
2025-06-01 22:21:20.765134 | orchestrator |
2025-06-01 22:21:20.765150 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-06-01 22:21:20.886891 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:21:20.886978 | orchestrator |
2025-06-01 22:21:20.886992 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-06-01 22:21:20.887004 | orchestrator |
2025-06-01 22:21:20.887016 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-01 22:21:22.793042 | orchestrator | ok: [testbed-manager]
2025-06-01 22:21:22.793157 | orchestrator |
2025-06-01 22:21:22.793173 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-06-01 22:21:22.901839 | orchestrator | included: osism.services.traefik for testbed-manager
2025-06-01 22:21:22.901956 | orchestrator |
2025-06-01 22:21:22.901983 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-06-01 22:21:22.955945 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-06-01 22:21:22.956027 | orchestrator |
2025-06-01 22:21:22.956039 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-06-01 22:21:24.081407 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-06-01 22:21:24.081511 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-06-01 22:21:24.081528 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-06-01 22:21:24.081540 | orchestrator |
2025-06-01 22:21:24.081552 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-06-01 22:21:26.041823 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-06-01 22:21:26.041926 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-06-01 22:21:26.041941 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-06-01 22:21:26.041955 | orchestrator |
2025-06-01 22:21:26.041968 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-06-01 22:21:26.721587 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-01 22:21:26.721686 | orchestrator | changed: [testbed-manager]
2025-06-01 22:21:26.721700 | orchestrator |
2025-06-01 22:21:26.721713 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-06-01 22:21:27.407427 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-01 22:21:27.407535 | orchestrator | changed: [testbed-manager]
2025-06-01 22:21:27.407551 | orchestrator |
2025-06-01 22:21:27.407564 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-06-01 22:21:27.464734 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:21:27.464819 | orchestrator |
2025-06-01 22:21:27.464833 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-06-01 22:21:27.841007 | orchestrator | ok: [testbed-manager]
2025-06-01 22:21:27.841101 | orchestrator |
2025-06-01 22:21:27.841115 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-06-01 22:21:27.922440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-06-01 22:21:27.922537 | orchestrator |
2025-06-01 22:21:27.922553 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-06-01 22:21:29.000759 | orchestrator | changed: [testbed-manager]
2025-06-01 22:21:29.000866 | orchestrator |
2025-06-01 22:21:29.000883 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-06-01 22:21:29.876623 | orchestrator | changed: [testbed-manager]
2025-06-01 22:21:29.876722 | orchestrator |
2025-06-01 22:21:29.876737 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-06-01 22:21:41.717504 | orchestrator | changed: [testbed-manager]
2025-06-01 22:21:41.717662 | orchestrator |
2025-06-01 22:21:41.717704 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-06-01 22:21:41.770416 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:21:41.770520 | orchestrator |
2025-06-01 22:21:41.770537 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-06-01 22:21:41.770550 | orchestrator |
2025-06-01 22:21:41.770562 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-01 22:21:43.559672 | orchestrator | ok: [testbed-manager]
2025-06-01 22:21:43.559805 | orchestrator |
2025-06-01 22:21:43.559822 | orchestrator | TASK [Apply manager role] ******************************************************
2025-06-01 22:21:43.655301 | orchestrator | included: osism.services.manager for testbed-manager
2025-06-01 22:21:43.655420 | orchestrator |
2025-06-01 22:21:43.655434 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-06-01 22:21:43.725397 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-06-01 22:21:43.725478 | orchestrator |
2025-06-01 22:21:43.725494 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-06-01 22:21:46.012524 | orchestrator | ok: [testbed-manager]
2025-06-01 22:21:46.012637 | orchestrator |
2025-06-01 22:21:46.012653 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-06-01 22:21:46.067649 | orchestrator | ok: [testbed-manager]
2025-06-01 22:21:46.067683 | orchestrator |
2025-06-01 22:21:46.067696 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-06-01 22:21:46.181626 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-06-01 22:21:46.181723 | orchestrator |
2025-06-01 22:21:46.181731 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-06-01 22:21:49.054343 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-06-01 22:21:49.054473 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-06-01 22:21:49.054487 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-06-01 22:21:49.054500 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-06-01 22:21:49.054511 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-06-01 22:21:49.054523 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-06-01 22:21:49.054534 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-06-01 22:21:49.054545 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-06-01 22:21:49.054557 | orchestrator |
2025-06-01 22:21:49.054572 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-06-01 22:21:49.718317 | orchestrator | changed: [testbed-manager]
2025-06-01 22:21:49.718443 | orchestrator |
2025-06-01 22:21:49.718460 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-06-01 22:21:50.364000 | orchestrator | changed: [testbed-manager]
2025-06-01 22:21:50.364129 | orchestrator |
2025-06-01 22:21:50.364145 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-06-01 22:21:50.443141 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-06-01 22:21:50.443203 | orchestrator |
2025-06-01 22:21:50.443221 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-06-01 22:21:51.714218 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-06-01 22:21:51.714418 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-06-01 22:21:51.714434 | orchestrator |
2025-06-01 22:21:51.715179 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-06-01 22:21:52.374718 | orchestrator | changed: [testbed-manager]
2025-06-01 22:21:52.374851 | orchestrator |
2025-06-01 22:21:52.374867 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-06-01 22:21:52.429081 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:21:52.429204 | orchestrator |
2025-06-01 22:21:52.429220 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-06-01 22:21:52.487634 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-06-01 22:21:52.487747 | orchestrator |
2025-06-01 22:21:52.487763 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-06-01 22:21:53.903549 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-01 22:21:53.903688 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-01 22:21:53.903705 | orchestrator | changed: [testbed-manager]
2025-06-01 22:21:53.903719 | orchestrator |
2025-06-01 22:21:53.903732 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-06-01 22:21:54.555323 | orchestrator | changed: [testbed-manager]
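Editor's note: the "Copy private ssh keys" task above deploys key material (the `item=None` lines are Ansible masking the secret loop items). The essential property of such a step is that private keys land with mode 0600. A hedged sketch of the equivalent shell operation; the directory, file name, and `/dev/null` source are illustrative, not taken from the role:

```shell
# Sketch: place a private key file with restrictive permissions, as the
# manager role's ssh-key task would. SECRETS_DIR and id_rsa.operator are
# assumptions; /dev/null stands in for the real key content. install(1)
# creates the destination with the requested mode in one step.
SECRETS_DIR="${SECRETS_DIR:-$(mktemp -d)}"
install -m 0600 /dev/null "$SECRETS_DIR/id_rsa.operator"
ls -l "$SECRETS_DIR/id_rsa.operator"
```

Using `install -m` avoids the window where a `cp` followed by `chmod` would briefly leave the key world-readable.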
2025-06-01 22:21:54.555449 | orchestrator |
2025-06-01 22:21:54.555465 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-06-01 22:21:54.608406 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:21:54.608504 | orchestrator |
2025-06-01 22:21:54.608519 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-06-01 22:21:54.697341 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-06-01 22:21:54.697457 | orchestrator |
2025-06-01 22:21:54.697471 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-06-01 22:21:55.279835 | orchestrator | changed: [testbed-manager]
2025-06-01 22:21:55.279952 | orchestrator |
2025-06-01 22:21:55.279968 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-06-01 22:21:55.733474 | orchestrator | changed: [testbed-manager]
2025-06-01 22:21:55.733593 | orchestrator |
2025-06-01 22:21:55.733605 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-06-01 22:21:57.056946 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-06-01 22:21:57.057100 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-06-01 22:21:57.057117 | orchestrator |
2025-06-01 22:21:57.057146 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-06-01 22:21:57.724978 | orchestrator | changed: [testbed-manager]
2025-06-01 22:21:57.725091 | orchestrator |
2025-06-01 22:21:57.725108 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-06-01 22:21:58.144967 | orchestrator | ok: [testbed-manager]
2025-06-01 22:21:58.145166 | orchestrator |
2025-06-01 22:21:58.145185 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-06-01 22:21:58.536342 | orchestrator | changed: [testbed-manager]
2025-06-01 22:21:58.536511 | orchestrator |
2025-06-01 22:21:58.536527 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-06-01 22:21:58.574358 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:21:58.574450 | orchestrator |
2025-06-01 22:21:58.574463 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-06-01 22:21:58.651015 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-06-01 22:21:58.651113 | orchestrator |
2025-06-01 22:21:58.651126 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-06-01 22:21:58.701973 | orchestrator | ok: [testbed-manager]
2025-06-01 22:21:58.702087 | orchestrator |
2025-06-01 22:21:58.702116 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-06-01 22:22:00.808726 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-06-01 22:22:00.808862 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-06-01 22:22:00.808877 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-06-01 22:22:00.808888 | orchestrator |
2025-06-01 22:22:00.808900 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-06-01 22:22:01.531314 | orchestrator | changed: [testbed-manager]
2025-06-01 22:22:01.531419 | orchestrator |
2025-06-01 22:22:01.531434 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-06-01 22:22:02.291684 | orchestrator | changed: [testbed-manager]
2025-06-01 22:22:02.291812 | orchestrator |
2025-06-01 22:22:02.291830 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-06-01 22:22:03.028741 | orchestrator | changed: [testbed-manager]
2025-06-01 22:22:03.028845 | orchestrator |
2025-06-01 22:22:03.028862 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-06-01 22:22:03.106713 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-06-01 22:22:03.106790 | orchestrator |
2025-06-01 22:22:03.106803 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-06-01 22:22:03.152968 | orchestrator | ok: [testbed-manager]
2025-06-01 22:22:03.153003 | orchestrator |
2025-06-01 22:22:03.153017 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-06-01 22:22:03.896048 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-06-01 22:22:03.896134 | orchestrator |
2025-06-01 22:22:03.896143 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-06-01 22:22:03.991925 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-06-01 22:22:03.992024 | orchestrator |
2025-06-01 22:22:03.992048 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-06-01 22:22:04.734989 | orchestrator | changed: [testbed-manager]
2025-06-01 22:22:04.735087 | orchestrator |
2025-06-01 22:22:04.735101 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-06-01 22:22:05.376241 | orchestrator | ok: [testbed-manager]
2025-06-01 22:22:05.376383 | orchestrator |
2025-06-01 22:22:05.376399 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-06-01 22:22:05.442251 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:22:05.442377 | orchestrator |
2025-06-01 22:22:05.442390 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-06-01 22:22:05.500102 | orchestrator | ok: [testbed-manager]
2025-06-01 22:22:05.500188 | orchestrator |
2025-06-01 22:22:05.500211 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-06-01 22:22:06.372059 | orchestrator | changed: [testbed-manager]
2025-06-01 22:22:06.372184 | orchestrator |
2025-06-01 22:22:06.372209 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-06-01 22:23:11.923990 | orchestrator | changed: [testbed-manager]
2025-06-01 22:23:11.924087 | orchestrator |
2025-06-01 22:23:11.924101 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-06-01 22:23:12.966573 | orchestrator | ok: [testbed-manager]
2025-06-01 22:23:12.966671 | orchestrator |
2025-06-01 22:23:12.966687 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-06-01 22:23:13.022951 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:23:13.023021 | orchestrator |
2025-06-01 22:23:13.023037 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-06-01 22:23:15.849398 | orchestrator | changed: [testbed-manager]
2025-06-01 22:23:15.849493 | orchestrator |
2025-06-01 22:23:15.849507 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-06-01 22:23:15.916275 | orchestrator | ok: [testbed-manager]
2025-06-01 22:23:15.916335 | orchestrator |
2025-06-01 22:23:15.916348 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-06-01 22:23:15.916360 | orchestrator |
2025-06-01 22:23:15.916371 | orchestrator | RUNNING
HANDLER [osism.services.manager : Restart manager service] ************* 2025-06-01 22:23:15.990204 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:23:15.990263 | orchestrator | 2025-06-01 22:23:15.990304 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-06-01 22:24:16.051134 | orchestrator | Pausing for 60 seconds 2025-06-01 22:24:16.051246 | orchestrator | changed: [testbed-manager] 2025-06-01 22:24:16.051262 | orchestrator | 2025-06-01 22:24:16.051275 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-06-01 22:24:20.184681 | orchestrator | changed: [testbed-manager] 2025-06-01 22:24:20.184799 | orchestrator | 2025-06-01 22:24:20.184817 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-06-01 22:25:01.880929 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-06-01 22:25:01.881044 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2025-06-01 22:25:01.881061 | orchestrator | changed: [testbed-manager] 2025-06-01 22:25:01.881074 | orchestrator | 2025-06-01 22:25:01.881086 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-06-01 22:25:11.253503 | orchestrator | changed: [testbed-manager] 2025-06-01 22:25:11.253629 | orchestrator | 2025-06-01 22:25:11.253670 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-06-01 22:25:11.335196 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-06-01 22:25:11.335293 | orchestrator | 2025-06-01 22:25:11.335308 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-01 22:25:11.335321 | orchestrator | 2025-06-01 22:25:11.335333 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-06-01 22:25:11.393694 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:25:11.393760 | orchestrator | 2025-06-01 22:25:11.393773 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:25:11.393786 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-01 22:25:11.393798 | orchestrator | 2025-06-01 22:25:11.501164 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-01 22:25:11.501248 | orchestrator | + deactivate 2025-06-01 22:25:11.501272 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-06-01 22:25:11.501292 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-01 22:25:11.501309 | orchestrator | + export PATH 2025-06-01 22:25:11.501331 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-06-01 
22:25:11.501349 | orchestrator | + '[' -n '' ']' 2025-06-01 22:25:11.501436 | orchestrator | + hash -r 2025-06-01 22:25:11.501455 | orchestrator | + '[' -n '' ']' 2025-06-01 22:25:11.501472 | orchestrator | + unset VIRTUAL_ENV 2025-06-01 22:25:11.501489 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-06-01 22:25:11.501508 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-06-01 22:25:11.501524 | orchestrator | + unset -f deactivate 2025-06-01 22:25:11.501541 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-06-01 22:25:11.507527 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-01 22:25:11.507584 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-01 22:25:11.507599 | orchestrator | + local max_attempts=60 2025-06-01 22:25:11.507613 | orchestrator | + local name=ceph-ansible 2025-06-01 22:25:11.507629 | orchestrator | + local attempt_num=1 2025-06-01 22:25:11.508466 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-01 22:25:11.550097 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-01 22:25:11.550143 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-01 22:25:11.550154 | orchestrator | + local max_attempts=60 2025-06-01 22:25:11.550163 | orchestrator | + local name=kolla-ansible 2025-06-01 22:25:11.550171 | orchestrator | + local attempt_num=1 2025-06-01 22:25:11.551304 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-01 22:25:11.599497 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-01 22:25:11.599585 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-01 22:25:11.599601 | orchestrator | + local max_attempts=60 2025-06-01 22:25:11.599613 | orchestrator | + local name=osism-ansible 2025-06-01 22:25:11.599625 | orchestrator | + local attempt_num=1 2025-06-01 22:25:11.600633 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' osism-ansible 2025-06-01 22:25:11.646518 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-01 22:25:11.646580 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-01 22:25:11.646592 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-01 22:25:12.371782 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-06-01 22:25:12.590467 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-06-01 22:25:12.590557 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-06-01 22:25:12.590573 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-06-01 22:25:12.590585 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-06-01 22:25:12.590598 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-06-01 22:25:12.590609 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-06-01 22:25:12.590620 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-06-01 22:25:12.590631 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2025-06-01 22:25:12.590641 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" listener About a minute 
ago Up About a minute (healthy) 2025-06-01 22:25:12.590652 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-06-01 22:25:12.590662 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-06-01 22:25:12.590673 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-06-01 22:25:12.590684 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" watchdog About a minute ago Up About a minute (healthy) 2025-06-01 22:25:12.590694 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-06-01 22:25:12.590705 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-06-01 22:25:12.590716 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-06-01 22:25:12.596555 | orchestrator | ++ semver 9.1.0 7.0.0 2025-06-01 22:25:12.655114 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-01 22:25:12.655190 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-06-01 22:25:12.660061 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-06-01 22:25:14.395648 | orchestrator | Registering Redlock._acquired_script 2025-06-01 22:25:14.395748 | orchestrator | Registering Redlock._extend_script 2025-06-01 22:25:14.395762 | orchestrator | Registering Redlock._release_script 
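The `wait_for_container_healthy` helper traced above polls `docker inspect` until the container reports `healthy`. A minimal sketch of such a loop; the `check_health` indirection is an assumption added here so the polling logic can be exercised without a Docker daemon (in the traced script the status comes directly from `/usr/bin/docker inspect -f '{{.State.Health.Status}}'`):

```shell
# Sketch of a health-wait loop like wait_for_container_healthy above.
# check_health is a hypothetical wrapper standing in for:
#   docker inspect -f '{{.State.Health.Status}}' "$name"
check_health() {
    docker inspect -f '{{.State.Health.Status}}' "$1"
}

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll until the container reports "healthy" or attempts run out
    while [[ "$(check_health "$name")" != "healthy" ]]; do
        if [[ "$attempt_num" -ge "$max_attempts" ]]; then
            echo "container ${name} not healthy after ${max_attempts} attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

The stubbed `check_health` makes the retry logic unit-testable; the real script above inlines the `docker inspect` call instead.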
2025-06-01 22:25:14.599865 | orchestrator | 2025-06-01 22:25:14 | INFO  | Task 4962a600-f419-468b-a2b8-e3a79cc61f74 (resolvconf) was prepared for execution. 2025-06-01 22:25:14.599957 | orchestrator | 2025-06-01 22:25:14 | INFO  | It takes a moment until task 4962a600-f419-468b-a2b8-e3a79cc61f74 (resolvconf) has been started and output is visible here. 2025-06-01 22:25:18.635462 | orchestrator | 2025-06-01 22:25:18.635596 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-06-01 22:25:18.636407 | orchestrator | 2025-06-01 22:25:18.637408 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 22:25:18.639192 | orchestrator | Sunday 01 June 2025 22:25:18 +0000 (0:00:00.149) 0:00:00.149 *********** 2025-06-01 22:25:22.504870 | orchestrator | ok: [testbed-manager] 2025-06-01 22:25:22.505218 | orchestrator | 2025-06-01 22:25:22.505911 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-01 22:25:22.507413 | orchestrator | Sunday 01 June 2025 22:25:22 +0000 (0:00:03.872) 0:00:04.022 *********** 2025-06-01 22:25:22.575783 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:25:22.575872 | orchestrator | 2025-06-01 22:25:22.576432 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-01 22:25:22.577164 | orchestrator | Sunday 01 June 2025 22:25:22 +0000 (0:00:00.070) 0:00:04.092 *********** 2025-06-01 22:25:22.660801 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-06-01 22:25:22.661556 | orchestrator | 2025-06-01 22:25:22.662359 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-01 22:25:22.663513 | orchestrator | Sunday 01 June 2025 22:25:22 +0000 (0:00:00.086) 0:00:04.179 
*********** 2025-06-01 22:25:22.747693 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-06-01 22:25:22.748205 | orchestrator | 2025-06-01 22:25:22.750098 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-01 22:25:22.751142 | orchestrator | Sunday 01 June 2025 22:25:22 +0000 (0:00:00.086) 0:00:04.265 *********** 2025-06-01 22:25:23.858242 | orchestrator | ok: [testbed-manager] 2025-06-01 22:25:23.859422 | orchestrator | 2025-06-01 22:25:23.861944 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-01 22:25:23.862848 | orchestrator | Sunday 01 June 2025 22:25:23 +0000 (0:00:01.109) 0:00:05.374 *********** 2025-06-01 22:25:23.929150 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:25:23.929236 | orchestrator | 2025-06-01 22:25:23.929299 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-01 22:25:23.930100 | orchestrator | Sunday 01 June 2025 22:25:23 +0000 (0:00:00.070) 0:00:05.445 *********** 2025-06-01 22:25:24.408512 | orchestrator | ok: [testbed-manager] 2025-06-01 22:25:24.408689 | orchestrator | 2025-06-01 22:25:24.409716 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-01 22:25:24.410841 | orchestrator | Sunday 01 June 2025 22:25:24 +0000 (0:00:00.480) 0:00:05.926 *********** 2025-06-01 22:25:24.476201 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:25:24.476293 | orchestrator | 2025-06-01 22:25:24.476908 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-01 22:25:24.477060 | orchestrator | Sunday 01 June 2025 22:25:24 +0000 (0:00:00.068) 0:00:05.994 *********** 2025-06-01 22:25:24.991690 | orchestrator | changed: [testbed-manager] 
2025-06-01 22:25:24.993447 | orchestrator | 2025-06-01 22:25:24.994345 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-01 22:25:24.995454 | orchestrator | Sunday 01 June 2025 22:25:24 +0000 (0:00:00.513) 0:00:06.508 *********** 2025-06-01 22:25:26.076218 | orchestrator | changed: [testbed-manager] 2025-06-01 22:25:26.076595 | orchestrator | 2025-06-01 22:25:26.077309 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-01 22:25:26.078317 | orchestrator | Sunday 01 June 2025 22:25:26 +0000 (0:00:01.082) 0:00:07.590 *********** 2025-06-01 22:25:27.050500 | orchestrator | ok: [testbed-manager] 2025-06-01 22:25:27.051045 | orchestrator | 2025-06-01 22:25:27.051949 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-01 22:25:27.052786 | orchestrator | Sunday 01 June 2025 22:25:27 +0000 (0:00:00.975) 0:00:08.566 *********** 2025-06-01 22:25:27.134824 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-06-01 22:25:27.135258 | orchestrator | 2025-06-01 22:25:27.136490 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-01 22:25:27.136936 | orchestrator | Sunday 01 June 2025 22:25:27 +0000 (0:00:00.086) 0:00:08.652 *********** 2025-06-01 22:25:28.307645 | orchestrator | changed: [testbed-manager] 2025-06-01 22:25:28.307861 | orchestrator | 2025-06-01 22:25:28.309553 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:25:28.309581 | orchestrator | 2025-06-01 22:25:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 22:25:28.309595 | orchestrator | 2025-06-01 22:25:28 | INFO  | Please wait and do not abort execution. 
2025-06-01 22:25:28.311061 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-01 22:25:28.311582 | orchestrator | 2025-06-01 22:25:28.312309 | orchestrator | 2025-06-01 22:25:28.313316 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:25:28.314077 | orchestrator | Sunday 01 June 2025 22:25:28 +0000 (0:00:01.172) 0:00:09.825 *********** 2025-06-01 22:25:28.314455 | orchestrator | =============================================================================== 2025-06-01 22:25:28.314860 | orchestrator | Gathering Facts --------------------------------------------------------- 3.87s 2025-06-01 22:25:28.315285 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.17s 2025-06-01 22:25:28.316017 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.11s 2025-06-01 22:25:28.316743 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.08s 2025-06-01 22:25:28.316764 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.98s 2025-06-01 22:25:28.317164 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.51s 2025-06-01 22:25:28.317702 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.48s 2025-06-01 22:25:28.318153 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2025-06-01 22:25:28.318602 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2025-06-01 22:25:28.318943 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-06-01 22:25:28.319442 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 
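The resolvconf run above reduces to a few idempotent steps: link the systemd-resolved stub file to /etc/resolv.conf, copy a resolved configuration, and restart the service. A rough manual sketch, not the role's actual implementation; the `root` parameter is an assumption added for dry runs against a scratch directory, and the nameserver value is a placeholder:

```shell
# Rough sketch of the osism.commons.resolvconf steps above.
# root= targets a scratch directory so this can be dry-run;
# the DNS value is a placeholder, not taken from the deployment.
setup_resolv() {
    local root=$1 dns=$2
    mkdir -p "${root}/etc/systemd"
    # Point resolv.conf at the systemd-resolved stub resolver
    ln -sf /run/systemd/resolve/stub-resolv.conf "${root}/etc/resolv.conf"
    # Minimal resolved configuration
    printf '[Resolve]\nDNS=%s\n' "$dns" > "${root}/etc/systemd/resolved.conf"
    # On the live system the role then restarts the service:
    #   systemctl restart systemd-resolved
}
```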
2025-06-01 22:25:28.319885 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2025-06-01 22:25:28.320344 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2025-06-01 22:25:28.779824 | orchestrator | + osism apply sshconfig 2025-06-01 22:25:30.449299 | orchestrator | Registering Redlock._acquired_script 2025-06-01 22:25:30.449440 | orchestrator | Registering Redlock._extend_script 2025-06-01 22:25:30.449458 | orchestrator | Registering Redlock._release_script 2025-06-01 22:25:30.514335 | orchestrator | 2025-06-01 22:25:30 | INFO  | Task e1709c9e-809c-49bf-a83d-699e4b77b2b8 (sshconfig) was prepared for execution. 2025-06-01 22:25:30.514489 | orchestrator | 2025-06-01 22:25:30 | INFO  | It takes a moment until task e1709c9e-809c-49bf-a83d-699e4b77b2b8 (sshconfig) has been started and output is visible here. 2025-06-01 22:25:34.570838 | orchestrator | 2025-06-01 22:25:34.572673 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-06-01 22:25:34.573241 | orchestrator | 2025-06-01 22:25:34.575036 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-06-01 22:25:34.575942 | orchestrator | Sunday 01 June 2025 22:25:34 +0000 (0:00:00.189) 0:00:00.189 *********** 2025-06-01 22:25:35.145820 | orchestrator | ok: [testbed-manager] 2025-06-01 22:25:35.146361 | orchestrator | 2025-06-01 22:25:35.146835 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-06-01 22:25:35.147723 | orchestrator | Sunday 01 June 2025 22:25:35 +0000 (0:00:00.578) 0:00:00.768 *********** 2025-06-01 22:25:35.669605 | orchestrator | changed: [testbed-manager] 2025-06-01 22:25:35.670904 | orchestrator | 2025-06-01 22:25:35.673483 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-06-01 22:25:35.673511 | orchestrator | 
Sunday 01 June 2025 22:25:35 +0000 (0:00:00.524) 0:00:01.292 *********** 2025-06-01 22:25:41.548535 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-06-01 22:25:41.548655 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-06-01 22:25:41.549157 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-06-01 22:25:41.550289 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-06-01 22:25:41.551454 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-06-01 22:25:41.554080 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-06-01 22:25:41.554105 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-06-01 22:25:41.554117 | orchestrator | 2025-06-01 22:25:41.554130 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-06-01 22:25:41.554328 | orchestrator | Sunday 01 June 2025 22:25:41 +0000 (0:00:05.875) 0:00:07.168 *********** 2025-06-01 22:25:41.626926 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:25:41.627368 | orchestrator | 2025-06-01 22:25:41.627926 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-06-01 22:25:41.629256 | orchestrator | Sunday 01 June 2025 22:25:41 +0000 (0:00:00.079) 0:00:07.247 *********** 2025-06-01 22:25:42.209332 | orchestrator | changed: [testbed-manager] 2025-06-01 22:25:42.210222 | orchestrator | 2025-06-01 22:25:42.210845 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:25:42.211404 | orchestrator | 2025-06-01 22:25:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 22:25:42.211430 | orchestrator | 2025-06-01 22:25:42 | INFO  | Please wait and do not abort execution. 
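The sshconfig play above follows a snippet-then-assemble pattern: one config fragment per host under `.ssh/config.d`, concatenated into a single ssh config by the "Assemble ssh config" task. A sketch of that pattern; the snippet contents and the `dragon` user are assumptions (the log only shows `/home/dragon`), not the role's actual template:

```shell
# Sketch of the sshconfig pattern above: one snippet per host in
# $home/.ssh/config.d, then assembled into a single ssh config file.
# Snippet contents and the "dragon" user are assumptions for illustration.
write_ssh_config() {
    local home=$1; shift
    mkdir -p "${home}/.ssh/config.d"
    for host in "$@"; do
        printf 'Host %s\n    User dragon\n' "$host" \
            > "${home}/.ssh/config.d/${host}"
    done
    # "Assemble ssh config": concatenate all per-host snippets
    cat "${home}/.ssh/config.d"/* > "${home}/.ssh/config"
}
```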
2025-06-01 22:25:42.212511 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 22:25:42.213578 | orchestrator | 2025-06-01 22:25:42.214651 | orchestrator | 2025-06-01 22:25:42.215081 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:25:42.215707 | orchestrator | Sunday 01 June 2025 22:25:42 +0000 (0:00:00.582) 0:00:07.830 *********** 2025-06-01 22:25:42.216492 | orchestrator | =============================================================================== 2025-06-01 22:25:42.217217 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.88s 2025-06-01 22:25:42.218417 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s 2025-06-01 22:25:42.219442 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.58s 2025-06-01 22:25:42.220145 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.52s 2025-06-01 22:25:42.221569 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2025-06-01 22:25:42.676700 | orchestrator | + osism apply known-hosts 2025-06-01 22:25:44.438473 | orchestrator | Registering Redlock._acquired_script 2025-06-01 22:25:44.438573 | orchestrator | Registering Redlock._extend_script 2025-06-01 22:25:44.438588 | orchestrator | Registering Redlock._release_script 2025-06-01 22:25:44.501779 | orchestrator | 2025-06-01 22:25:44 | INFO  | Task 974d718b-961a-4582-9864-172d130c88e8 (known-hosts) was prepared for execution. 2025-06-01 22:25:44.501868 | orchestrator | 2025-06-01 22:25:44 | INFO  | It takes a moment until task 974d718b-961a-4582-9864-172d130c88e8 (known-hosts) has been started and output is visible here. 
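The known-hosts task that follows runs `ssh-keyscan` against every host and writes the collected public keys into a known_hosts file. A sketch of that flow; `scan_host` is a hypothetical wrapper around `ssh-keyscan` added here so the write path can be exercised without network access:

```shell
# Sketch of the known_hosts pattern below: scan each host's public
# keys and collect them into one known_hosts file. scan_host is a
# hypothetical wrapper around ssh-keyscan for illustration.
scan_host() {
    ssh-keyscan -t rsa,ecdsa,ed25519 "$1" 2>/dev/null
}

build_known_hosts() {
    local outfile=$1; shift
    : > "$outfile"   # truncate, then append one host's keys at a time
    for host in "$@"; do
        scan_host "$host" >> "$outfile"
    done
}
```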
2025-06-01 22:25:48.497812 | orchestrator | 2025-06-01 22:25:48.498814 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-06-01 22:25:48.501126 | orchestrator | 2025-06-01 22:25:48.501153 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-06-01 22:25:48.501819 | orchestrator | Sunday 01 June 2025 22:25:48 +0000 (0:00:00.170) 0:00:00.170 *********** 2025-06-01 22:25:54.554752 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-01 22:25:54.554939 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-01 22:25:54.555419 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-01 22:25:54.556122 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-01 22:25:54.556882 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-01 22:25:54.557372 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-01 22:25:54.558376 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-06-01 22:25:54.558427 | orchestrator | 2025-06-01 22:25:54.559248 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-06-01 22:25:54.559452 | orchestrator | Sunday 01 June 2025 22:25:54 +0000 (0:00:06.055) 0:00:06.225 *********** 2025-06-01 22:25:54.722734 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-06-01 22:25:54.723189 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-01 22:25:54.723574 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-01 22:25:54.724237 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-01 22:25:54.725038 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-01 22:25:54.725429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-01 22:25:54.726408 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-01 22:25:54.726432 | orchestrator | 2025-06-01 22:25:54.727402 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:25:54.727923 | orchestrator | Sunday 01 June 2025 22:25:54 +0000 (0:00:00.170) 0:00:06.396 *********** 2025-06-01 22:25:55.962496 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHpymBx2nj3a/+60KDqrMBXcIZgCmWPE2HudUPNHNinQ) 2025-06-01 22:25:55.963688 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCWX8Jp5XHCDct4CxWcMIPpADmB/X+g1oRNdWi9n+DS7JcceorpHQXdoLfXrJPdaa/HYpir2wPYKHmG8WTQVowaun9bu5AGVeMRs+aoR4bI36Up3/b8PSC6UIwHDMrzzx2V24YYNFFGIk3tBiWhA8GTy6AVdtFp9hkqiI79Hw+4NIuCGiFWw3tp8UpUBRkFdhPxUe2pfsbUYgPPuPfMbmNfH29VdIYVxR+lQYZNt4l3mtiKq1B/Se0WoSpm+aGLFUhElEQ3SIWClEpEAcbUKehUP9w/zLjz3FAq6w+C5hwbQ0P5X6farMJx8M067Ak+AXRzOycrkmSW3X+wDVnS/UXC3aGaLMx5NU4DkMq97MtmYwUxy6VevchamF3ICJgmiEKuvTB7zWZBSQL9uMou/GEAUukptGgM6Kzqm7CL4xKteBnUDH0ZwI0PNQt9mCo35kUMbXAIJZx/fdyuSwwDhp0Kuoyx7aXbkVWh1OUfVjpgqWS/0oC/DlpOl6cqDrGCJx0=) 2025-06-01 22:25:55.964749 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDV3vNg6fN9h5ltjyBkAvz5Pxo/t7XCnA7tvYdEhwUOddIyVDzZOC1lSifsp9lsxpGv44o8jmKASuUvUOdT1KvE=) 2025-06-01 22:25:55.966279 | orchestrator | 2025-06-01 22:25:55.967585 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:25:55.968823 | orchestrator | Sunday 01 June 2025 22:25:55 +0000 (0:00:01.238) 0:00:07.634 *********** 2025-06-01 22:25:57.077062 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFOTrO8R5uirhr64kAgpflKXW1RWo/rNI74UKXBKCIHFFr67Q2i7qrqMpxFkKEhN2HBft6cUHb6UUJl5B/wMZIotixjddStfgTPAd3PrZ5PjH1S11Y7aoI/IHnfSEBZJHHY+nNkIWR9Su/WlkekAuNxz0ehUCovpynbrFjk+D6QNWWiRlwK7mKA1mLPi8Lkg/zdTextJk+YRmyNkbOb/eXsKkHadUzOZQZ5NBZXP0vg27oVrqmFNpdoYbL3M0/UFYT1rcTpMnV0vD4yi2dTr9TthDEmbLBljitEcYmRGLd+rmkHs9v6PV7+eVpwJj9bGr76iutr6OPD/S+fYGfqTx4wfxaB08ZLV10HgVAJxIOjFjjyPbNOijhkmcvj1IeuVmYowSQDrvtep/NzA4m0en1puLfcMDJUCT8IHtrkuYcDm5ZpY2SJw0KPrynhLuxoyk5MrSqJP+nkt6Uz52HMbKj8++o0bXjH2zlAQiB/C2RMpuYBk7hgxDdZIL+nXqPkB8=) 2025-06-01 22:25:57.077693 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFF+13UjM7qMCwH4Js6CLRYoAv3w321U+ODG/ra/q97t) 2025-06-01 22:25:57.078568 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKs3wYRz8MnN/TCHSIMDh1CxuZigEvk7xKLu27LMEU9p5InCvrI2iatfPLQyWsehgfKIH5Jbt8ynn0hx3aWm1g0=) 2025-06-01 22:25:57.079432 | orchestrator | 2025-06-01 22:25:57.080241 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:25:57.081011 | orchestrator | Sunday 01 June 2025 22:25:57 +0000 (0:00:01.114) 0:00:08.749 *********** 2025-06-01 22:25:58.154100 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDM5EvLHbokBpqU9LKnupIo9rSHd9iVOz+Wnwx8rSajWXmoZJtxbsDxOdqdWl+QAZoJElHq3ddTEn9HQ4jdxED2xOgUFLUy4NBGfl9+s5GA2mW7V/Yh7aajk47fXQ1n8mPMIWLbdzC08bp2mUDqsYiVajR0Ean8rw2XHg8SUxieoIMD3rR54dSDKheG5V/w6M5MO5BH5zAqo+4aP/tnX6Al6XW3f0QyZB3Vz/rjIKySR2m7ldCqZRHvjYEM5ujhNYiacRfBP3VeTjsfxNPpfilkBLwlGKKLEu7pWEBVEOM6AV3NX+39eaG0t8IpVY5RQdzVKxh8PdeNSAs2f2hYrLHGxxgdhHtDl4CthKyNHwfCcHk87ZY5Zf8bCXwsvBqJLL0b4+iq6E96XFR5jZaRtkHIMoMR5DDjO/fKpKyDSfvnxJsqjGLWQjs7iTxuMR3VbwO99B1Kuv87vv1gcxoHsXC1KRRqsiinH2aZ+2uv6/RYDqE+YHG+NjiKcFppRHF6sqM=) 2025-06-01 22:25:58.154209 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJR7AEkI+PQ1631HqTxnAK+zdFpAMOxyg2itFPU0HCDNd2dHuPP4yj6hhELegBG0yPU2LduIGc16IrVbHD3pGiQ=) 2025-06-01 22:25:58.154877 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHIvfi6ZQI6tw0KLmXO9SnBxehbDeww/QdgwZ9JYPDpa) 2025-06-01 22:25:58.155302 | orchestrator | 2025-06-01 22:25:58.155807 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:25:58.156223 | orchestrator | Sunday 01 June 2025 22:25:58 +0000 (0:00:01.076) 0:00:09.826 *********** 2025-06-01 22:25:59.222524 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDMzRmJ/3qvW04l8yZj2v9GhN/xBL3p7klIQjUsgmWfJ4d6/2JCJCgm2bSYJ/GWaVmnf9g5ODUVbEbW7vkPnhra9ZFxUlv3f1hK+2sv3b2cmKPc4QSgSOkyj4Wd6bvZJxhJA5YztpCaR+y3+9Tn1ZSSw1XTDkEpE8eXKQmzh3n2NhDQolRgK7H9dBKQITMbiRiez3SWah5U/L0v8akClsFsO9wlzV7V+TPEOU8RpRX9mmRWB3JLWr/el2hfn+PcmzPke4x4d7qBbEJdBJfQTTUByPj2Li9PlPpIZceDa2pQiQHIQ2OXqKEeEkUkoorMg8lToYllq+qHhvaOPpCfH/XU5EeXAUYpXur80QqA01y9B5ghzXOH/hRVh7Q8y2hjvxo9oqD+gn+++dedE9uum8Im0tL4rxm7sdGE1h+lLY7ORy9K6XVqQ1O8JpUX1D0NbS4d1agpiYDMuUgW0OmswTXyKZ6cUdCBnifidTsXNXkIrjC5deeUdaUNJhnDXoQMqgM=) 2025-06-01 22:25:59.222976 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP6CZ85HVM+Y6S3yAVlY3NpVIxIqRBCcC3z6HTgiIKmWkxgIcxGqVOFrgH3XE2KColivOCY4dHdOnQFU3D+EKlA=) 2025-06-01 22:25:59.223785 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFza6csJdU6n08NXRJVbcfRxXiwGNYTtdT6UPQP7MxRH) 2025-06-01 22:25:59.224620 | orchestrator | 2025-06-01 22:25:59.225099 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:25:59.226013 | orchestrator | Sunday 01 June 2025 22:25:59 +0000 (0:00:01.067) 0:00:10.893 *********** 2025-06-01 22:26:00.332470 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDgr48raHDH3j3xeHJEyZcK0BhEtLFKm3wtURA/PQfKdcKp2oxohAsIEMEi3wGuPhb/BDOjxbU15EfmacSu1I8w79aOmz2Uq+nuQzIM3re323btnyH/EDCKDHtjtEVNiqLRf9hD2qEhJNpNvmSAv1fConATm5mwfad8TeXvmaYKmYfS5p4pGeXLUTymfwl9KZFEr1UuF5kMHGPSUjL/gzryGbkRZTzcN/dItQuYpMZ1BGKLBaVkBPcvmbm49/5XqNKW3riJ0+ViKwFwZD2cipZ/wVCy1dg237ZIMZFE2tXClsMp6tfpGPCkplFBUc0HBpYqV6rc4dpBKI4q5Gg4s9nA67L8HoBHwNBMI8vZM1sQM9UrE7SNYw4Gf3vdLgIFXJaG04tsK4YwsIUmGNC6TMAVhvT9psRksvtAGh38EN5eQXrMlb4jT9WL1ulhDHXdg1a2MY0uMpmkmQKxGk8DONEcs/H+pU7PE4nFVKnHthlgsdqcuafmyZ4k5YgbrSqSgxE=) 2025-06-01 22:26:00.333690 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNYxM2+OLFdhaUdoNfOl4oxLLs2or2IRz05KlFjuzAPAHQEUpDz8ZP9O0l0cvcw/05ZuTmSZtoTxyCvG+NK6LjI=) 2025-06-01 22:26:00.333717 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMnWfn8gEAnqOqcUQGZ/heWpr7o9a8m9gLSlHe1qOcZv) 2025-06-01 22:26:00.333727 | orchestrator | 2025-06-01 22:26:00.334105 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:26:00.334438 | orchestrator | Sunday 01 June 2025 22:26:00 +0000 (0:00:01.111) 0:00:12.005 *********** 2025-06-01 22:26:01.387533 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDa6vLHMTyhBf20rqsmw/f6PWpytOaGDhP4IJ2bhF9Uf3Qr44I50yWNT7jSD5tYTsP8pFe35v6XsriWEyi8mpQsPbUB1yZm5b8LsGuy8VBTbmILLhK+dZeiksFt0S6er8M7GEop3SPcFi99M7UbnAlIh2joMhd0AC0jbkfKnZNjzBvUHrJBEPf5j8z2q+lCbf7NvsJ5KuW5cjlO+xCt+uY5NP67jfIyZbGfx/wEqMOXeMxVsKul5MzKOYe/wq+f2hIOWWCHUdde8OaQF1xVpceofVJd9SeeYPDR9BA+SeJ0YqQJwgvxvCBDW1XayPoaId0eGF6TNdoj2fzwpuC2Y3rP29br4RcEf8roG2irGQ+AHdHwfvFeWCalMP0qs6MafenMOVj1WeDPRkki65SW271OOzCt3ve2LNnnxkeM5Z4//qRiULcvB6HG7DPpHHhP0+xrvcT2eoVNX7pH33wfQkTmIwU3K+LgYUl1qHSdOmy9NP0tyC0Zpd+ffuR6AEx0mmM=) 2025-06-01 22:26:01.389318 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGiVL1gag4w5nG1tgv82/9A/VYqxh1d16MbUDRBVqZ+onKkJDXXodZeovqyBTpHZq8bjNwnhf7y1MXOEcVPsr90=) 2025-06-01 22:26:01.390164 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINnJqwjYFetqw33tY5pbpoUg16tQ0ADdcCA8XqKto7h8) 2025-06-01 22:26:01.390891 | orchestrator | 2025-06-01 22:26:01.392071 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:26:01.392659 | orchestrator | Sunday 01 June 2025 22:26:01 +0000 (0:00:01.055) 
0:00:13.061 *********** 2025-06-01 22:26:02.491141 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG1zhTmcOBS9p5PeRSSb5nuFZTGMr94cWjJBeF7w9ZFpPUmwo1Ynj1T0+oXxOJbW5lTiUDTrmBPYbivlt9hHHcA=) 2025-06-01 22:26:02.491243 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIBEbRO8uItikg2s3117EON8+rGuM0YfYhOUuXHINzbQ7w+lKth5aQMdIzW/RlUQtTjlSXN8h1j5mTo1fFMPYpgdEDEeboo4TRDP6y6987HM/5TuI0RJ8uX11byg2oJyhCWpVimN3iqEl/41o3CwToiXslRnBwUb7bSj2xsNsRsHuXp+plnVorLr0Tje644ztvHhZ9t69Ug2CqHZJYnt8NRo8gEHzmRtK7XcBFLD0ATNs6zcZMMfCVryarA8ywV2XFKsEfxzi6dretD4leqcOAD8qOg7biDwkjMHeQfs84imFCAJCsVzyRamPVFaZtD+SYo7tN28rSOAvZHkBaJ3pTfLS3ewaxQZ4nPpisDudx9AOpG2ihjhlq3Kr7as+jpIHFeYT90pR2+1aZl28NemWppSSHcJNUF3Mf4CZzRxWGo9yrtHIslm+axZfu31oAIiE59jOdrO2l4gP9OjnDO4tdmSy7S8CqCHp1eni69hVSAcszg/hGvfmof8143Kk8N38=) 2025-06-01 22:26:02.491289 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL0szJJwxz9OcKHEcYGIT5VOdpGUJ0y8HxbyXuIRu3FO) 2025-06-01 22:26:02.491303 | orchestrator | 2025-06-01 22:26:02.491519 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-06-01 22:26:02.491973 | orchestrator | Sunday 01 June 2025 22:26:02 +0000 (0:00:01.103) 0:00:14.164 *********** 2025-06-01 22:26:07.794435 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-01 22:26:07.796457 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-01 22:26:07.798534 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-01 22:26:07.800024 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-01 22:26:07.801128 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-01 22:26:07.802084 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-01 22:26:07.803041 | orchestrator | ok: 
[testbed-manager] => (item=testbed-node-2) 2025-06-01 22:26:07.803996 | orchestrator | 2025-06-01 22:26:07.804507 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-06-01 22:26:07.805312 | orchestrator | Sunday 01 June 2025 22:26:07 +0000 (0:00:05.301) 0:00:19.466 *********** 2025-06-01 22:26:07.958549 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-06-01 22:26:07.959029 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-01 22:26:07.959596 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-01 22:26:07.960978 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-01 22:26:07.962191 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-01 22:26:07.962871 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-01 22:26:07.963319 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-01 22:26:07.964006 | orchestrator | 2025-06-01 22:26:07.964550 
| orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:26:07.965062 | orchestrator | Sunday 01 June 2025 22:26:07 +0000 (0:00:00.166) 0:00:19.633 *********** 2025-06-01 22:26:09.072622 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHpymBx2nj3a/+60KDqrMBXcIZgCmWPE2HudUPNHNinQ) 2025-06-01 22:26:09.073075 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCWX8Jp5XHCDct4CxWcMIPpADmB/X+g1oRNdWi9n+DS7JcceorpHQXdoLfXrJPdaa/HYpir2wPYKHmG8WTQVowaun9bu5AGVeMRs+aoR4bI36Up3/b8PSC6UIwHDMrzzx2V24YYNFFGIk3tBiWhA8GTy6AVdtFp9hkqiI79Hw+4NIuCGiFWw3tp8UpUBRkFdhPxUe2pfsbUYgPPuPfMbmNfH29VdIYVxR+lQYZNt4l3mtiKq1B/Se0WoSpm+aGLFUhElEQ3SIWClEpEAcbUKehUP9w/zLjz3FAq6w+C5hwbQ0P5X6farMJx8M067Ak+AXRzOycrkmSW3X+wDVnS/UXC3aGaLMx5NU4DkMq97MtmYwUxy6VevchamF3ICJgmiEKuvTB7zWZBSQL9uMou/GEAUukptGgM6Kzqm7CL4xKteBnUDH0ZwI0PNQt9mCo35kUMbXAIJZx/fdyuSwwDhp0Kuoyx7aXbkVWh1OUfVjpgqWS/0oC/DlpOl6cqDrGCJx0=) 2025-06-01 22:26:09.073947 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDV3vNg6fN9h5ltjyBkAvz5Pxo/t7XCnA7tvYdEhwUOddIyVDzZOC1lSifsp9lsxpGv44o8jmKASuUvUOdT1KvE=) 2025-06-01 22:26:09.075004 | orchestrator | 2025-06-01 22:26:09.076053 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:26:09.077293 | orchestrator | Sunday 01 June 2025 22:26:09 +0000 (0:00:01.112) 0:00:20.745 *********** 2025-06-01 22:26:10.212039 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDFOTrO8R5uirhr64kAgpflKXW1RWo/rNI74UKXBKCIHFFr67Q2i7qrqMpxFkKEhN2HBft6cUHb6UUJl5B/wMZIotixjddStfgTPAd3PrZ5PjH1S11Y7aoI/IHnfSEBZJHHY+nNkIWR9Su/WlkekAuNxz0ehUCovpynbrFjk+D6QNWWiRlwK7mKA1mLPi8Lkg/zdTextJk+YRmyNkbOb/eXsKkHadUzOZQZ5NBZXP0vg27oVrqmFNpdoYbL3M0/UFYT1rcTpMnV0vD4yi2dTr9TthDEmbLBljitEcYmRGLd+rmkHs9v6PV7+eVpwJj9bGr76iutr6OPD/S+fYGfqTx4wfxaB08ZLV10HgVAJxIOjFjjyPbNOijhkmcvj1IeuVmYowSQDrvtep/NzA4m0en1puLfcMDJUCT8IHtrkuYcDm5ZpY2SJw0KPrynhLuxoyk5MrSqJP+nkt6Uz52HMbKj8++o0bXjH2zlAQiB/C2RMpuYBk7hgxDdZIL+nXqPkB8=) 2025-06-01 22:26:10.213196 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKs3wYRz8MnN/TCHSIMDh1CxuZigEvk7xKLu27LMEU9p5InCvrI2iatfPLQyWsehgfKIH5Jbt8ynn0hx3aWm1g0=) 2025-06-01 22:26:10.214225 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFF+13UjM7qMCwH4Js6CLRYoAv3w321U+ODG/ra/q97t) 2025-06-01 22:26:10.215352 | orchestrator | 2025-06-01 22:26:10.216467 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:26:10.217446 | orchestrator | Sunday 01 June 2025 22:26:10 +0000 (0:00:01.139) 0:00:21.885 *********** 2025-06-01 22:26:11.314601 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDM5EvLHbokBpqU9LKnupIo9rSHd9iVOz+Wnwx8rSajWXmoZJtxbsDxOdqdWl+QAZoJElHq3ddTEn9HQ4jdxED2xOgUFLUy4NBGfl9+s5GA2mW7V/Yh7aajk47fXQ1n8mPMIWLbdzC08bp2mUDqsYiVajR0Ean8rw2XHg8SUxieoIMD3rR54dSDKheG5V/w6M5MO5BH5zAqo+4aP/tnX6Al6XW3f0QyZB3Vz/rjIKySR2m7ldCqZRHvjYEM5ujhNYiacRfBP3VeTjsfxNPpfilkBLwlGKKLEu7pWEBVEOM6AV3NX+39eaG0t8IpVY5RQdzVKxh8PdeNSAs2f2hYrLHGxxgdhHtDl4CthKyNHwfCcHk87ZY5Zf8bCXwsvBqJLL0b4+iq6E96XFR5jZaRtkHIMoMR5DDjO/fKpKyDSfvnxJsqjGLWQjs7iTxuMR3VbwO99B1Kuv87vv1gcxoHsXC1KRRqsiinH2aZ+2uv6/RYDqE+YHG+NjiKcFppRHF6sqM=) 2025-06-01 22:26:11.314759 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJR7AEkI+PQ1631HqTxnAK+zdFpAMOxyg2itFPU0HCDNd2dHuPP4yj6hhELegBG0yPU2LduIGc16IrVbHD3pGiQ=) 2025-06-01 22:26:11.315817 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHIvfi6ZQI6tw0KLmXO9SnBxehbDeww/QdgwZ9JYPDpa) 2025-06-01 22:26:11.316564 | orchestrator | 2025-06-01 22:26:11.317336 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:26:11.317832 | orchestrator | Sunday 01 June 2025 22:26:11 +0000 (0:00:01.102) 0:00:22.987 *********** 2025-06-01 22:26:12.386925 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMzRmJ/3qvW04l8yZj2v9GhN/xBL3p7klIQjUsgmWfJ4d6/2JCJCgm2bSYJ/GWaVmnf9g5ODUVbEbW7vkPnhra9ZFxUlv3f1hK+2sv3b2cmKPc4QSgSOkyj4Wd6bvZJxhJA5YztpCaR+y3+9Tn1ZSSw1XTDkEpE8eXKQmzh3n2NhDQolRgK7H9dBKQITMbiRiez3SWah5U/L0v8akClsFsO9wlzV7V+TPEOU8RpRX9mmRWB3JLWr/el2hfn+PcmzPke4x4d7qBbEJdBJfQTTUByPj2Li9PlPpIZceDa2pQiQHIQ2OXqKEeEkUkoorMg8lToYllq+qHhvaOPpCfH/XU5EeXAUYpXur80QqA01y9B5ghzXOH/hRVh7Q8y2hjvxo9oqD+gn+++dedE9uum8Im0tL4rxm7sdGE1h+lLY7ORy9K6XVqQ1O8JpUX1D0NbS4d1agpiYDMuUgW0OmswTXyKZ6cUdCBnifidTsXNXkIrjC5deeUdaUNJhnDXoQMqgM=) 2025-06-01 22:26:12.388166 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP6CZ85HVM+Y6S3yAVlY3NpVIxIqRBCcC3z6HTgiIKmWkxgIcxGqVOFrgH3XE2KColivOCY4dHdOnQFU3D+EKlA=) 2025-06-01 22:26:12.388905 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFza6csJdU6n08NXRJVbcfRxXiwGNYTtdT6UPQP7MxRH) 2025-06-01 22:26:12.390509 | orchestrator | 2025-06-01 22:26:12.391891 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:26:12.392884 | orchestrator | Sunday 01 June 2025 22:26:12 +0000 (0:00:01.072) 0:00:24.060 
*********** 2025-06-01 22:26:13.454930 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMnWfn8gEAnqOqcUQGZ/heWpr7o9a8m9gLSlHe1qOcZv) 2025-06-01 22:26:13.455708 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDgr48raHDH3j3xeHJEyZcK0BhEtLFKm3wtURA/PQfKdcKp2oxohAsIEMEi3wGuPhb/BDOjxbU15EfmacSu1I8w79aOmz2Uq+nuQzIM3re323btnyH/EDCKDHtjtEVNiqLRf9hD2qEhJNpNvmSAv1fConATm5mwfad8TeXvmaYKmYfS5p4pGeXLUTymfwl9KZFEr1UuF5kMHGPSUjL/gzryGbkRZTzcN/dItQuYpMZ1BGKLBaVkBPcvmbm49/5XqNKW3riJ0+ViKwFwZD2cipZ/wVCy1dg237ZIMZFE2tXClsMp6tfpGPCkplFBUc0HBpYqV6rc4dpBKI4q5Gg4s9nA67L8HoBHwNBMI8vZM1sQM9UrE7SNYw4Gf3vdLgIFXJaG04tsK4YwsIUmGNC6TMAVhvT9psRksvtAGh38EN5eQXrMlb4jT9WL1ulhDHXdg1a2MY0uMpmkmQKxGk8DONEcs/H+pU7PE4nFVKnHthlgsdqcuafmyZ4k5YgbrSqSgxE=) 2025-06-01 22:26:13.457036 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNYxM2+OLFdhaUdoNfOl4oxLLs2or2IRz05KlFjuzAPAHQEUpDz8ZP9O0l0cvcw/05ZuTmSZtoTxyCvG+NK6LjI=) 2025-06-01 22:26:13.458279 | orchestrator | 2025-06-01 22:26:13.458917 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:26:13.459337 | orchestrator | Sunday 01 June 2025 22:26:13 +0000 (0:00:01.066) 0:00:25.126 *********** 2025-06-01 22:26:14.512600 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDa6vLHMTyhBf20rqsmw/f6PWpytOaGDhP4IJ2bhF9Uf3Qr44I50yWNT7jSD5tYTsP8pFe35v6XsriWEyi8mpQsPbUB1yZm5b8LsGuy8VBTbmILLhK+dZeiksFt0S6er8M7GEop3SPcFi99M7UbnAlIh2joMhd0AC0jbkfKnZNjzBvUHrJBEPf5j8z2q+lCbf7NvsJ5KuW5cjlO+xCt+uY5NP67jfIyZbGfx/wEqMOXeMxVsKul5MzKOYe/wq+f2hIOWWCHUdde8OaQF1xVpceofVJd9SeeYPDR9BA+SeJ0YqQJwgvxvCBDW1XayPoaId0eGF6TNdoj2fzwpuC2Y3rP29br4RcEf8roG2irGQ+AHdHwfvFeWCalMP0qs6MafenMOVj1WeDPRkki65SW271OOzCt3ve2LNnnxkeM5Z4//qRiULcvB6HG7DPpHHhP0+xrvcT2eoVNX7pH33wfQkTmIwU3K+LgYUl1qHSdOmy9NP0tyC0Zpd+ffuR6AEx0mmM=) 2025-06-01 22:26:14.512732 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGiVL1gag4w5nG1tgv82/9A/VYqxh1d16MbUDRBVqZ+onKkJDXXodZeovqyBTpHZq8bjNwnhf7y1MXOEcVPsr90=) 2025-06-01 22:26:14.513618 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINnJqwjYFetqw33tY5pbpoUg16tQ0ADdcCA8XqKto7h8) 2025-06-01 22:26:14.516336 | orchestrator | 2025-06-01 22:26:14.516833 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:26:14.517606 | orchestrator | Sunday 01 June 2025 22:26:14 +0000 (0:00:01.054) 0:00:26.181 *********** 2025-06-01 22:26:15.613253 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL0szJJwxz9OcKHEcYGIT5VOdpGUJ0y8HxbyXuIRu3FO) 2025-06-01 22:26:15.614853 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDIBEbRO8uItikg2s3117EON8+rGuM0YfYhOUuXHINzbQ7w+lKth5aQMdIzW/RlUQtTjlSXN8h1j5mTo1fFMPYpgdEDEeboo4TRDP6y6987HM/5TuI0RJ8uX11byg2oJyhCWpVimN3iqEl/41o3CwToiXslRnBwUb7bSj2xsNsRsHuXp+plnVorLr0Tje644ztvHhZ9t69Ug2CqHZJYnt8NRo8gEHzmRtK7XcBFLD0ATNs6zcZMMfCVryarA8ywV2XFKsEfxzi6dretD4leqcOAD8qOg7biDwkjMHeQfs84imFCAJCsVzyRamPVFaZtD+SYo7tN28rSOAvZHkBaJ3pTfLS3ewaxQZ4nPpisDudx9AOpG2ihjhlq3Kr7as+jpIHFeYT90pR2+1aZl28NemWppSSHcJNUF3Mf4CZzRxWGo9yrtHIslm+axZfu31oAIiE59jOdrO2l4gP9OjnDO4tdmSy7S8CqCHp1eni69hVSAcszg/hGvfmof8143Kk8N38=) 2025-06-01 22:26:15.614965 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG1zhTmcOBS9p5PeRSSb5nuFZTGMr94cWjJBeF7w9ZFpPUmwo1Ynj1T0+oXxOJbW5lTiUDTrmBPYbivlt9hHHcA=) 2025-06-01 22:26:15.615715 | orchestrator | 2025-06-01 22:26:15.616696 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-06-01 22:26:15.617870 | orchestrator | Sunday 01 June 2025 22:26:15 +0000 (0:00:01.104) 0:00:27.286 *********** 2025-06-01 22:26:15.780061 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-01 22:26:15.780506 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-01 22:26:15.780954 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-01 22:26:15.783276 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-01 22:26:15.784301 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-01 22:26:15.785133 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-01 22:26:15.785943 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-01 22:26:15.786816 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:26:15.787039 | orchestrator | 2025-06-01 22:26:15.787544 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts 
entries] *************
2025-06-01 22:26:15.788004 | orchestrator | Sunday 01 June 2025 22:26:15 +0000 (0:00:00.168) 0:00:27.455 ***********
2025-06-01 22:26:15.838254 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:26:15.838800 | orchestrator |
2025-06-01 22:26:15.839771 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-06-01 22:26:15.840759 | orchestrator | Sunday 01 June 2025 22:26:15 +0000 (0:00:00.058) 0:00:27.513 ***********
2025-06-01 22:26:15.888355 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:26:15.888625 | orchestrator |
2025-06-01 22:26:15.888924 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-06-01 22:26:15.888950 | orchestrator | Sunday 01 June 2025 22:26:15 +0000 (0:00:00.050) 0:00:27.563 ***********
2025-06-01 22:26:16.540904 | orchestrator | changed: [testbed-manager]
2025-06-01 22:26:16.541530 | orchestrator |
2025-06-01 22:26:16.542998 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:26:16.543419 | orchestrator | 2025-06-01 22:26:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 22:26:16.543715 | orchestrator | 2025-06-01 22:26:16 | INFO  | Please wait and do not abort execution.
2025-06-01 22:26:16.544825 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-01 22:26:16.545589 | orchestrator |
2025-06-01 22:26:16.547828 | orchestrator |
2025-06-01 22:26:16.548931 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:26:16.549904 | orchestrator | Sunday 01 June 2025 22:26:16 +0000 (0:00:00.650) 0:00:28.214 ***********
2025-06-01 22:26:16.550930 | orchestrator | ===============================================================================
2025-06-01 22:26:16.551782 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.06s
2025-06-01 22:26:16.552672 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.30s
2025-06-01 22:26:16.553496 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s
2025-06-01 22:26:16.554590 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s
2025-06-01 22:26:16.555387 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2025-06-01 22:26:16.555824 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2025-06-01 22:26:16.556781 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2025-06-01 22:26:16.557409 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2025-06-01 22:26:16.558249 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2025-06-01 22:26:16.559035 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2025-06-01 22:26:16.559478 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2025-06-01 22:26:16.560517 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-06-01 22:26:16.562177 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-06-01 22:26:16.562620 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-06-01 22:26:16.563092 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2025-06-01 22:26:16.563789 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-06-01 22:26:16.563949 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.65s
2025-06-01 22:26:16.564747 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s
2025-06-01 22:26:16.565110 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s
2025-06-01 22:26:16.565514 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s
2025-06-01 22:26:17.072266 | orchestrator | + osism apply squid
2025-06-01 22:26:18.751696 | orchestrator | Registering Redlock._acquired_script
2025-06-01 22:26:18.751785 | orchestrator | Registering Redlock._extend_script
2025-06-01 22:26:18.751801 | orchestrator | Registering Redlock._release_script
2025-06-01 22:26:18.810810 | orchestrator | 2025-06-01 22:26:18 | INFO  | Task d2d2cde0-5cc1-4ef6-997f-06050f7e33e8 (squid) was prepared for execution.
2025-06-01 22:26:18.810896 | orchestrator | 2025-06-01 22:26:18 | INFO  | It takes a moment until task d2d2cde0-5cc1-4ef6-997f-06050f7e33e8 (squid) has been started and output is visible here.
2025-06-01 22:26:22.892731 | orchestrator | 2025-06-01 22:26:22.895719 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-06-01 22:26:22.895757 | orchestrator | 2025-06-01 22:26:22.898101 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-06-01 22:26:22.898662 | orchestrator | Sunday 01 June 2025 22:26:22 +0000 (0:00:00.180) 0:00:00.180 *********** 2025-06-01 22:26:22.987293 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-06-01 22:26:22.987790 | orchestrator | 2025-06-01 22:26:22.989567 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-06-01 22:26:22.990817 | orchestrator | Sunday 01 June 2025 22:26:22 +0000 (0:00:00.097) 0:00:00.278 *********** 2025-06-01 22:26:24.416625 | orchestrator | ok: [testbed-manager] 2025-06-01 22:26:24.417138 | orchestrator | 2025-06-01 22:26:24.417906 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-06-01 22:26:24.418895 | orchestrator | Sunday 01 June 2025 22:26:24 +0000 (0:00:01.427) 0:00:01.706 *********** 2025-06-01 22:26:25.656050 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-06-01 22:26:25.656894 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-06-01 22:26:25.656988 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-06-01 22:26:25.658372 | orchestrator | 2025-06-01 22:26:25.658659 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-06-01 22:26:25.659682 | orchestrator | Sunday 01 June 2025 22:26:25 +0000 (0:00:01.239) 0:00:02.945 *********** 2025-06-01 22:26:26.755187 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-06-01 22:26:26.756643 | 
orchestrator | 2025-06-01 22:26:26.756745 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-06-01 22:26:26.757234 | orchestrator | Sunday 01 June 2025 22:26:26 +0000 (0:00:01.100) 0:00:04.045 *********** 2025-06-01 22:26:27.120770 | orchestrator | ok: [testbed-manager] 2025-06-01 22:26:27.122269 | orchestrator | 2025-06-01 22:26:27.123171 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-06-01 22:26:27.123796 | orchestrator | Sunday 01 June 2025 22:26:27 +0000 (0:00:00.365) 0:00:04.410 *********** 2025-06-01 22:26:28.088517 | orchestrator | changed: [testbed-manager] 2025-06-01 22:26:28.090181 | orchestrator | 2025-06-01 22:26:28.091020 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-06-01 22:26:28.091876 | orchestrator | Sunday 01 June 2025 22:26:28 +0000 (0:00:00.967) 0:00:05.378 *********** 2025-06-01 22:26:59.835910 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-06-01 22:26:59.838707 | orchestrator | ok: [testbed-manager] 2025-06-01 22:26:59.839706 | orchestrator | 2025-06-01 22:26:59.840163 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-06-01 22:26:59.840869 | orchestrator | Sunday 01 June 2025 22:26:59 +0000 (0:00:31.744) 0:00:37.123 *********** 2025-06-01 22:27:12.342258 | orchestrator | changed: [testbed-manager] 2025-06-01 22:27:12.342383 | orchestrator | 2025-06-01 22:27:12.342536 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-06-01 22:27:12.343786 | orchestrator | Sunday 01 June 2025 22:27:12 +0000 (0:00:12.505) 0:00:49.628 *********** 2025-06-01 22:28:12.424406 | orchestrator | Pausing for 60 seconds 2025-06-01 22:28:12.424525 | orchestrator | changed: [testbed-manager] 2025-06-01 22:28:12.425237 | orchestrator | 2025-06-01 22:28:12.425998 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-06-01 22:28:12.427528 | orchestrator | Sunday 01 June 2025 22:28:12 +0000 (0:01:00.082) 0:01:49.711 *********** 2025-06-01 22:28:12.494303 | orchestrator | ok: [testbed-manager] 2025-06-01 22:28:12.494718 | orchestrator | 2025-06-01 22:28:12.495785 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-06-01 22:28:12.496370 | orchestrator | Sunday 01 June 2025 22:28:12 +0000 (0:00:00.074) 0:01:49.785 *********** 2025-06-01 22:28:13.140243 | orchestrator | changed: [testbed-manager] 2025-06-01 22:28:13.141099 | orchestrator | 2025-06-01 22:28:13.142226 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:28:13.142638 | orchestrator | 2025-06-01 22:28:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-06-01 22:28:13.142760 | orchestrator | 2025-06-01 22:28:13 | INFO  | Please wait and do not abort execution. 2025-06-01 22:28:13.144449 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 22:28:13.145146 | orchestrator | 2025-06-01 22:28:13.146117 | orchestrator | 2025-06-01 22:28:13.146933 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:28:13.147656 | orchestrator | Sunday 01 June 2025 22:28:13 +0000 (0:00:00.645) 0:01:50.431 *********** 2025-06-01 22:28:13.148472 | orchestrator | =============================================================================== 2025-06-01 22:28:13.149254 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-06-01 22:28:13.150168 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.74s 2025-06-01 22:28:13.151270 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.51s 2025-06-01 22:28:13.152713 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.43s 2025-06-01 22:28:13.153532 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.24s 2025-06-01 22:28:13.153807 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.10s 2025-06-01 22:28:13.155135 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.97s 2025-06-01 22:28:13.156030 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.65s 2025-06-01 22:28:13.157159 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2025-06-01 22:28:13.157355 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-06-01 22:28:13.159249 | orchestrator | 
osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-06-01 22:28:13.632728 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]] 2025-06-01 22:28:13.632849 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-06-01 22:28:13.638100 | orchestrator | ++ semver 9.1.0 9.0.0 2025-06-01 22:28:13.711640 | orchestrator | + [[ 1 -lt 0 ]] 2025-06-01 22:28:13.712748 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-06-01 22:28:15.431110 | orchestrator | Registering Redlock._acquired_script 2025-06-01 22:28:15.431209 | orchestrator | Registering Redlock._extend_script 2025-06-01 22:28:15.431222 | orchestrator | Registering Redlock._release_script 2025-06-01 22:28:15.490528 | orchestrator | 2025-06-01 22:28:15 | INFO  | Task f232eb99-ddb4-4d37-9f59-936fb60616ab (operator) was prepared for execution. 2025-06-01 22:28:15.490633 | orchestrator | 2025-06-01 22:28:15 | INFO  | It takes a moment until task f232eb99-ddb4-4d37-9f59-936fb60616ab (operator) has been started and output is visible here. 
2025-06-01 22:28:19.540670 | orchestrator |
2025-06-01 22:28:19.542645 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-06-01 22:28:19.543590 | orchestrator |
2025-06-01 22:28:19.543754 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-01 22:28:19.544470 | orchestrator | Sunday 01 June 2025 22:28:19 +0000 (0:00:00.157) 0:00:00.157 ***********
2025-06-01 22:28:23.033631 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:28:23.035284 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:28:23.038308 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:28:23.038355 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:28:23.040784 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:28:23.040833 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:28:23.042000 | orchestrator |
2025-06-01 22:28:23.043259 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-06-01 22:28:23.044103 | orchestrator | Sunday 01 June 2025 22:28:23 +0000 (0:00:03.494) 0:00:03.651 ***********
2025-06-01 22:28:23.827901 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:28:23.832756 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:28:23.834963 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:28:23.835141 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:28:23.836710 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:28:23.836806 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:28:23.839273 | orchestrator |
2025-06-01 22:28:23.841901 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-06-01 22:28:23.841947 | orchestrator |
2025-06-01 22:28:23.841970 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-06-01 22:28:23.841991 | orchestrator | Sunday 01 June 2025 22:28:23 +0000 (0:00:00.792) 0:00:04.443 ***********
2025-06-01 22:28:23.903164 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:28:23.926444 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:28:23.945851 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:28:23.993686 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:28:23.994384 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:28:23.994984 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:28:23.995584 | orchestrator |
2025-06-01 22:28:23.995939 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-06-01 22:28:23.996549 | orchestrator | Sunday 01 June 2025 22:28:23 +0000 (0:00:00.169) 0:00:04.612 ***********
2025-06-01 22:28:24.063773 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:28:24.090272 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:28:24.111178 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:28:24.160940 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:28:24.162420 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:28:24.163988 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:28:24.165469 | orchestrator |
2025-06-01 22:28:24.166507 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-06-01 22:28:24.167433 | orchestrator | Sunday 01 June 2025 22:28:24 +0000 (0:00:00.166) 0:00:04.778 ***********
2025-06-01 22:28:24.757991 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:28:24.758276 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:28:24.758798 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:28:24.760068 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:28:24.760468 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:28:24.760959 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:28:24.761265 | orchestrator |
2025-06-01 22:28:24.761768 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-06-01 22:28:24.762119 | orchestrator | Sunday 01 June 2025 22:28:24 +0000 (0:00:00.598) 0:00:05.377 ***********
2025-06-01 22:28:25.576899 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:28:25.577476 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:28:25.578326 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:28:25.579749 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:28:25.580159 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:28:25.580791 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:28:25.581886 | orchestrator |
2025-06-01 22:28:25.582401 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-06-01 22:28:25.586262 | orchestrator | Sunday 01 June 2025 22:28:25 +0000 (0:00:00.817) 0:00:06.195 ***********
2025-06-01 22:28:26.780575 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-06-01 22:28:26.783919 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-06-01 22:28:26.784252 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-06-01 22:28:26.786129 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-06-01 22:28:26.787310 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-06-01 22:28:26.787927 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-06-01 22:28:26.789057 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-06-01 22:28:26.792933 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-06-01 22:28:26.792960 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-06-01 22:28:26.792972 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-06-01 22:28:26.792983 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-06-01 22:28:26.792994 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-06-01 22:28:26.793005 | orchestrator |
2025-06-01 22:28:26.793018 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-06-01 22:28:26.793031 | orchestrator | Sunday 01 June 2025 22:28:26 +0000 (0:00:01.202) 0:00:07.398 ***********
2025-06-01 22:28:28.081091 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:28:28.081987 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:28:28.082091 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:28:28.085956 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:28:28.086091 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:28:28.086108 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:28:28.086121 | orchestrator |
2025-06-01 22:28:28.086697 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-06-01 22:28:28.087716 | orchestrator | Sunday 01 June 2025 22:28:28 +0000 (0:00:01.299) 0:00:08.697 ***********
2025-06-01 22:28:29.333440 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-06-01 22:28:29.334234 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-06-01 22:28:29.335831 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-06-01 22:28:29.444030 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-06-01 22:28:29.445353 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-06-01 22:28:29.448952 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-06-01 22:28:29.449000 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-06-01 22:28:29.449015 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-06-01 22:28:29.449235 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-06-01 22:28:29.450331 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-06-01 22:28:29.451240 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-06-01 22:28:29.452171 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-06-01 22:28:29.453405 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-06-01 22:28:29.458814 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-06-01 22:28:29.458913 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-06-01 22:28:29.458939 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-06-01 22:28:29.458957 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-06-01 22:28:29.458976 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-06-01 22:28:29.458995 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-06-01 22:28:29.459015 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-06-01 22:28:29.459034 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-06-01 22:28:29.459053 | orchestrator |
2025-06-01 22:28:29.459222 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-06-01 22:28:29.459707 | orchestrator | Sunday 01 June 2025 22:28:29 +0000 (0:00:01.364) 0:00:10.062 ***********
2025-06-01 22:28:30.038443 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:28:30.038899 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:28:30.039409 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:28:30.040076 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:28:30.041678 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:28:30.041702 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:28:30.041710 | orchestrator |
2025-06-01 22:28:30.041719 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-06-01 22:28:30.041729 | orchestrator | Sunday 01 June 2025 22:28:30 +0000 (0:00:00.594) 0:00:10.657 ***********
2025-06-01 22:28:30.115464 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:28:30.141092 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:28:30.168643 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:28:30.224641 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:28:30.225529 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:28:30.225554 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:28:30.225867 | orchestrator |
2025-06-01 22:28:30.226417 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-06-01 22:28:30.226911 | orchestrator | Sunday 01 June 2025 22:28:30 +0000 (0:00:00.187) 0:00:10.844 ***********
2025-06-01 22:28:30.957442 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-06-01 22:28:30.960485 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-01 22:28:30.960702 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-06-01 22:28:30.961369 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:28:30.962380 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:28:30.962917 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:28:30.964208 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-01 22:28:30.964867 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:28:30.965128 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-01 22:28:30.969532 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:28:30.969614 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-01 22:28:30.969913 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:28:30.970334 | orchestrator |
2025-06-01 22:28:30.970711 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-06-01 22:28:30.973821 | orchestrator | Sunday 01 June 2025 22:28:30 +0000 (0:00:00.731) 0:00:11.576 ***********
2025-06-01 22:28:31.023163 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:28:31.041715 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:28:31.066106 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:28:31.128512 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:28:31.129207 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:28:31.129980 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:28:31.131129 | orchestrator |
2025-06-01 22:28:31.132022 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-06-01 22:28:31.133548 | orchestrator | Sunday 01 June 2025 22:28:31 +0000 (0:00:00.172) 0:00:11.749 ***********
2025-06-01 22:28:31.188147 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:28:31.214102 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:28:31.263712 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:28:31.304680 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:28:31.305666 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:28:31.306919 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:28:31.308020 | orchestrator |
2025-06-01 22:28:31.308780 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-06-01 22:28:31.310649 | orchestrator | Sunday 01 June 2025 22:28:31 +0000 (0:00:00.175) 0:00:11.924 ***********
2025-06-01 22:28:31.365392 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:28:31.394067 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:28:31.442804 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:28:31.486450 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:28:31.488675 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:28:31.489579 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:28:31.493537 | orchestrator |
2025-06-01 22:28:31.494811 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-06-01 22:28:31.496177 | orchestrator | Sunday 01 June 2025 22:28:31 +0000 (0:00:00.180) 0:00:12.105 ***********
2025-06-01 22:28:32.199117 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:28:32.200221 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:28:32.203219 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:28:32.204753 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:28:32.208733 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:28:32.209818 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:28:32.211866 | orchestrator |
2025-06-01 22:28:32.213943 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-06-01 22:28:32.215167 | orchestrator | Sunday 01 June 2025 22:28:32 +0000 (0:00:00.710) 0:00:12.816 ***********
2025-06-01 22:28:32.315466 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:28:32.337496 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:28:32.451564 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:28:32.453048 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:28:32.454091 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:28:32.455145 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:28:32.456368 | orchestrator |
2025-06-01 22:28:32.457478 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:28:32.457960 | orchestrator | 2025-06-01 22:28:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 22:28:32.458253 | orchestrator | 2025-06-01 22:28:32 | INFO  | Please wait and do not abort execution.
2025-06-01 22:28:32.459443 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-01 22:28:32.460397 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-01 22:28:32.462113 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-01 22:28:32.463153 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-01 22:28:32.464093 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-01 22:28:32.465124 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-01 22:28:32.466709 | orchestrator |
2025-06-01 22:28:32.467950 | orchestrator |
2025-06-01 22:28:32.468585 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:28:32.469455 | orchestrator | Sunday 01 June 2025 22:28:32 +0000 (0:00:00.253) 0:00:13.069 ***********
2025-06-01 22:28:32.470366 | orchestrator | ===============================================================================
2025-06-01 22:28:32.470746 | orchestrator | Gathering Facts --------------------------------------------------------- 3.49s
2025-06-01 22:28:32.471630 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.36s
2025-06-01 22:28:32.472174 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.30s
2025-06-01 22:28:32.472964 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.20s
2025-06-01 22:28:32.473692 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.82s
2025-06-01 22:28:32.474312 | orchestrator | Do not require tty for all users ---------------------------------------- 0.79s
2025-06-01 22:28:32.475031 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.73s
2025-06-01 22:28:32.475672 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.71s
2025-06-01 22:28:32.476547 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s
2025-06-01 22:28:32.477144 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s
2025-06-01 22:28:32.477753 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s
2025-06-01 22:28:32.478309 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s
2025-06-01 22:28:32.478887 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s
2025-06-01 22:28:32.479475 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s
2025-06-01 22:28:32.480108 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s
2025-06-01 22:28:32.480595 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s
2025-06-01 22:28:32.481104 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s
2025-06-01 22:28:33.000919 | orchestrator | + osism apply --environment custom facts
2025-06-01 22:28:34.740416 | orchestrator | 2025-06-01 22:28:34 | INFO  | Trying to run play facts in environment custom
2025-06-01 22:28:34.745048 | orchestrator | Registering Redlock._acquired_script
2025-06-01 22:28:34.745112 | orchestrator | Registering Redlock._extend_script
2025-06-01 22:28:34.745126 | orchestrator | Registering Redlock._release_script
2025-06-01 22:28:34.803314 | orchestrator | 2025-06-01 22:28:34 | INFO  | Task c773a31e-e8a4-4903-9e46-6df795529995 (facts) was prepared for execution.
2025-06-01 22:28:34.803395 | orchestrator | 2025-06-01 22:28:34 | INFO  | It takes a moment until task c773a31e-e8a4-4903-9e46-6df795529995 (facts) has been started and output is visible here.
2025-06-01 22:28:38.857998 | orchestrator |
2025-06-01 22:28:38.858174 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-06-01 22:28:38.860286 | orchestrator |
2025-06-01 22:28:38.864375 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-01 22:28:38.864688 | orchestrator | Sunday 01 June 2025 22:28:38 +0000 (0:00:00.088) 0:00:00.088 ***********
2025-06-01 22:28:40.608280 | orchestrator | ok: [testbed-manager]
2025-06-01 22:28:40.608393 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:28:40.608675 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:28:40.609660 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:28:40.609810 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:28:40.611677 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:28:40.611715 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:28:40.611726 | orchestrator |
2025-06-01 22:28:40.611738 | orchestrator | TASK [Copy fact file] **********************************************************
2025-06-01 22:28:40.612292 | orchestrator | Sunday 01 June 2025 22:28:40 +0000 (0:00:01.752) 0:00:01.841 ***********
2025-06-01 22:28:41.842547 | orchestrator | ok: [testbed-manager]
2025-06-01 22:28:41.842654 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:28:41.842994 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:28:41.847014 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:28:41.847062 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:28:41.847075 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:28:41.847086 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:28:41.847098 | orchestrator |
2025-06-01 22:28:41.847157 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-06-01 22:28:41.847912 | orchestrator |
2025-06-01 22:28:41.848440 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-06-01 22:28:41.849186 | orchestrator | Sunday 01 June 2025 22:28:41 +0000 (0:00:01.234) 0:00:03.075 ***********
2025-06-01 22:28:41.982324 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:28:41.982574 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:28:41.982832 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:28:41.983299 | orchestrator |
2025-06-01 22:28:41.983784 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-06-01 22:28:41.986800 | orchestrator | Sunday 01 June 2025 22:28:41 +0000 (0:00:00.139) 0:00:03.215 ***********
2025-06-01 22:28:42.184971 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:28:42.185715 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:28:42.186905 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:28:42.187605 | orchestrator |
2025-06-01 22:28:42.190593 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-06-01 22:28:42.190620 | orchestrator | Sunday 01 June 2025 22:28:42 +0000 (0:00:00.204) 0:00:03.420 ***********
2025-06-01 22:28:42.377326 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:28:42.377842 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:28:42.378570 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:28:42.382190 | orchestrator |
2025-06-01 22:28:42.382271 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-06-01 22:28:42.383121 | orchestrator | Sunday 01 June 2025 22:28:42 +0000 (0:00:00.190) 0:00:03.611 ***********
2025-06-01 22:28:42.575778 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:28:42.576046 | orchestrator |
2025-06-01 22:28:42.576349 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-06-01 22:28:42.576904 | orchestrator | Sunday 01 June 2025 22:28:42 +0000 (0:00:00.199) 0:00:03.810 ***********
2025-06-01 22:28:43.011323 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:28:43.017073 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:28:43.017190 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:28:43.017208 | orchestrator |
2025-06-01 22:28:43.017577 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-06-01 22:28:43.020557 | orchestrator | Sunday 01 June 2025 22:28:43 +0000 (0:00:00.431) 0:00:04.242 ***********
2025-06-01 22:28:43.149107 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:28:43.149375 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:28:43.150135 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:28:43.150758 | orchestrator |
2025-06-01 22:28:43.151134 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-06-01 22:28:43.155141 | orchestrator | Sunday 01 June 2025 22:28:43 +0000 (0:00:00.141) 0:00:04.384 ***********
2025-06-01 22:28:44.186351 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:28:44.186529 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:28:44.186935 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:28:44.188575 | orchestrator |
2025-06-01 22:28:44.190475 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-06-01 22:28:44.190689 | orchestrator | Sunday 01 June 2025 22:28:44 +0000 (0:00:01.034) 0:00:05.418 ***********
2025-06-01 22:28:44.692336 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:28:44.693178 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:28:44.694613 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:28:44.695318 | orchestrator |
2025-06-01 22:28:44.696449 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-06-01 22:28:44.697203 | orchestrator | Sunday 01 June 2025 22:28:44 +0000 (0:00:00.507) 0:00:05.926 ***********
2025-06-01 22:28:45.822783 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:28:45.823793 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:28:45.823999 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:28:45.824517 | orchestrator |
2025-06-01 22:28:45.825279 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-06-01 22:28:45.828637 | orchestrator | Sunday 01 June 2025 22:28:45 +0000 (0:00:01.128) 0:00:07.054 ***********
2025-06-01 22:28:58.756892 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:28:58.757039 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:28:58.757056 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:28:58.757069 | orchestrator |
2025-06-01 22:28:58.757843 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-06-01 22:28:58.758169 | orchestrator | Sunday 01 June 2025 22:28:58 +0000 (0:00:12.933) 0:00:19.988 ***********
2025-06-01 22:28:58.868292 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:28:58.868601 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:28:58.868974 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:28:58.869853 | orchestrator |
2025-06-01 22:28:58.871123 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-06-01 22:28:58.871415 | orchestrator | Sunday 01 June 2025 22:28:58 +0000 (0:00:00.115) 0:00:20.103 ***********
2025-06-01 22:29:05.973522 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:29:05.973612 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:29:05.973619 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:29:05.974962 | orchestrator |
2025-06-01 22:29:05.975066 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-01 22:29:05.975689 | orchestrator | Sunday 01 June 2025 22:29:05 +0000 (0:00:07.102) 0:00:27.205 ***********
2025-06-01 22:29:06.422166 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:29:06.423893 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:29:06.424906 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:29:06.426112 | orchestrator |
2025-06-01 22:29:06.427152 | orchestrator | TASK [Copy fact files] *********************************************************
2025-06-01 22:29:06.428326 | orchestrator | Sunday 01 June 2025 22:29:06 +0000 (0:00:00.450) 0:00:27.656 ***********
2025-06-01 22:29:09.925660 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-06-01 22:29:09.926204 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-06-01 22:29:09.926766 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-06-01 22:29:09.927577 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-06-01 22:29:09.928640 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-06-01 22:29:09.929286 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-06-01 22:29:09.929964 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-06-01 22:29:09.930420 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-06-01 22:29:09.933064 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-06-01 22:29:09.934215 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-06-01 22:29:09.935023 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-06-01 22:29:09.935903 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-06-01 22:29:09.936593 | orchestrator |
2025-06-01 22:29:09.937232 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-01 22:29:09.937588 | orchestrator | Sunday 01 June 2025 22:29:09 +0000 (0:00:03.501) 0:00:31.158 ***********
2025-06-01 22:29:11.160856 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:29:11.161041 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:29:11.162703 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:29:11.163941 | orchestrator |
2025-06-01 22:29:11.165085 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-01 22:29:11.166524 | orchestrator |
2025-06-01 22:29:11.167745 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-01 22:29:11.168712 | orchestrator | Sunday 01 June 2025 22:29:11 +0000 (0:00:01.234) 0:00:32.392 ***********
2025-06-01 22:29:14.967089 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:29:14.967261 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:29:14.967593 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:29:14.967964 | orchestrator | ok: [testbed-manager]
2025-06-01 22:29:14.968595 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:29:14.969577 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:29:14.970267 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:29:14.970537 | orchestrator |
2025-06-01 22:29:14.971525 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:29:14.972468 | orchestrator | 2025-06-01 22:29:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 22:29:14.972491 | orchestrator | 2025-06-01 22:29:14 | INFO  | Please wait and do not abort execution.
2025-06-01 22:29:14.972870 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:29:14.973944 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:29:14.974394 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:29:14.975345 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:29:14.976500 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:29:14.977592 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:29:14.977972 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:29:14.978756 | orchestrator |
2025-06-01 22:29:14.979872 | orchestrator |
2025-06-01 22:29:14.981221 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:29:14.982525 | orchestrator | Sunday 01 June 2025 22:29:14 +0000 (0:00:03.807) 0:00:36.200 ***********
2025-06-01 22:29:14.982857 | orchestrator | ===============================================================================
2025-06-01 22:29:14.983882 | orchestrator | osism.commons.repository : Update package cache ------------------------ 12.93s
2025-06-01 22:29:14.984775 | orchestrator | Install required packages (Debian) -------------------------------------- 7.10s
2025-06-01 22:29:14.985242 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.81s
2025-06-01 22:29:14.986073 | orchestrator | Copy fact files --------------------------------------------------------- 3.50s
2025-06-01 22:29:14.986462 | orchestrator | Create custom facts directory ------------------------------------------- 1.75s
2025-06-01 22:29:14.987098 | orchestrator | Copy fact file ---------------------------------------------------------- 1.23s
2025-06-01 22:29:14.988484 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.23s
2025-06-01 22:29:14.988879 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.13s
2025-06-01 22:29:14.989327 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s
2025-06-01 22:29:14.989849 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.51s
2025-06-01 22:29:14.990259 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s
2025-06-01 22:29:14.991026 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2025-06-01 22:29:14.991542 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s
2025-06-01 22:29:14.991975 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.20s
2025-06-01 22:29:14.992481 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s
2025-06-01 22:29:14.992822 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2025-06-01 22:29:14.993367 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.14s
2025-06-01 22:29:14.993774 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s
2025-06-01 22:29:15.525738 | orchestrator | + osism apply bootstrap
2025-06-01 22:29:17.211911 | orchestrator | Registering Redlock._acquired_script
2025-06-01 22:29:17.212020 | orchestrator | Registering Redlock._extend_script
2025-06-01 22:29:17.212036 | orchestrator | Registering Redlock._release_script
2025-06-01 22:29:17.282462 | orchestrator | 2025-06-01 22:29:17 | INFO  | Task 0fc8fa6d-2b87-4357-b698-455ad8004e94 (bootstrap) was prepared for execution.
2025-06-01 22:29:17.282547 | orchestrator | 2025-06-01 22:29:17 | INFO  | It takes a moment until task 0fc8fa6d-2b87-4357-b698-455ad8004e94 (bootstrap) has been started and output is visible here.
2025-06-01 22:29:21.459089 | orchestrator |
2025-06-01 22:29:21.459254 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-06-01 22:29:21.459273 | orchestrator |
2025-06-01 22:29:21.459285 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-06-01 22:29:21.463478 | orchestrator | Sunday 01 June 2025 22:29:21 +0000 (0:00:00.164) 0:00:00.164 ***********
2025-06-01 22:29:21.537803 | orchestrator | ok: [testbed-manager]
2025-06-01 22:29:21.569576 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:29:21.594209 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:29:21.623977 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:29:21.724908 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:29:21.725349 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:29:21.725658 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:29:21.726428 | orchestrator |
2025-06-01 22:29:21.729677 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-01 22:29:21.729700 | orchestrator |
2025-06-01 22:29:21.729713 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-01 22:29:21.729726
| orchestrator | Sunday 01 June 2025 22:29:21 +0000 (0:00:00.273) 0:00:00.438 *********** 2025-06-01 22:29:25.310780 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:29:25.311550 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:29:25.314210 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:29:25.315171 | orchestrator | ok: [testbed-manager] 2025-06-01 22:29:25.316055 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:29:25.317772 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:29:25.318393 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:29:25.319230 | orchestrator | 2025-06-01 22:29:25.319496 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-06-01 22:29:25.320183 | orchestrator | 2025-06-01 22:29:25.320632 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-01 22:29:25.321301 | orchestrator | Sunday 01 June 2025 22:29:25 +0000 (0:00:03.585) 0:00:04.023 *********** 2025-06-01 22:29:25.433382 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-01 22:29:25.433782 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-01 22:29:25.434444 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-06-01 22:29:25.437232 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-01 22:29:25.437454 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 22:29:25.438508 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-01 22:29:25.440617 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-01 22:29:25.469932 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-01 22:29:25.470210 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 22:29:25.470458 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-06-01 
22:29:25.470798 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-01 22:29:25.519621 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-06-01 22:29:25.519702 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-01 22:29:25.519710 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-06-01 22:29:25.519717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-01 22:29:25.519723 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-06-01 22:29:25.519729 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-01 22:29:25.519736 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-06-01 22:29:25.770265 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-06-01 22:29:25.770365 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:29:25.771410 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-06-01 22:29:25.772642 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-01 22:29:25.773922 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-06-01 22:29:25.776487 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:29:25.776509 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-01 22:29:25.776881 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-06-01 22:29:25.777469 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-01 22:29:25.778299 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-01 22:29:25.779776 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-01 22:29:25.779835 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-01 22:29:25.780547 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-06-01 
22:29:25.781010 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-01 22:29:25.781824 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-01 22:29:25.782408 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:29:25.783078 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-01 22:29:25.783772 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-01 22:29:25.784413 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:29:25.785221 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-06-01 22:29:25.785832 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-01 22:29:25.786197 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-06-01 22:29:25.786799 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-01 22:29:25.787316 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-06-01 22:29:25.787966 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-06-01 22:29:25.788342 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-06-01 22:29:25.788784 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-06-01 22:29:25.789325 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-01 22:29:25.789814 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:29:25.790324 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-06-01 22:29:25.790991 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-06-01 22:29:25.791399 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-06-01 22:29:25.791901 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-06-01 22:29:25.792307 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-06-01 22:29:25.792644 | orchestrator | 
skipping: [testbed-node-1] 2025-06-01 22:29:25.793077 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-06-01 22:29:25.793489 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-06-01 22:29:25.793859 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:29:25.794306 | orchestrator | 2025-06-01 22:29:25.794860 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-06-01 22:29:25.795173 | orchestrator | 2025-06-01 22:29:25.795530 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-06-01 22:29:25.795864 | orchestrator | Sunday 01 June 2025 22:29:25 +0000 (0:00:00.453) 0:00:04.477 *********** 2025-06-01 22:29:26.996874 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:29:26.997572 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:29:26.998437 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:29:27.000306 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:29:27.002262 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:29:27.002298 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:29:27.002728 | orchestrator | ok: [testbed-manager] 2025-06-01 22:29:27.003653 | orchestrator | 2025-06-01 22:29:27.004329 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-06-01 22:29:27.005055 | orchestrator | Sunday 01 June 2025 22:29:26 +0000 (0:00:01.232) 0:00:05.709 *********** 2025-06-01 22:29:28.251079 | orchestrator | ok: [testbed-manager] 2025-06-01 22:29:28.251840 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:29:28.253039 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:29:28.254113 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:29:28.254600 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:29:28.255232 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:29:28.255605 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:29:28.255895 | orchestrator | 2025-06-01 
22:29:28.256302 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-06-01 22:29:28.256729 | orchestrator | Sunday 01 June 2025 22:29:28 +0000 (0:00:01.252) 0:00:06.962 *********** 2025-06-01 22:29:28.522807 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:29:28.523796 | orchestrator | 2025-06-01 22:29:28.523879 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-06-01 22:29:28.524223 | orchestrator | Sunday 01 June 2025 22:29:28 +0000 (0:00:00.272) 0:00:07.234 *********** 2025-06-01 22:29:30.538207 | orchestrator | changed: [testbed-manager] 2025-06-01 22:29:30.538396 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:29:30.538747 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:29:30.539485 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:29:30.540394 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:29:30.541885 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:29:30.542718 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:29:30.544722 | orchestrator | 2025-06-01 22:29:30.545752 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-06-01 22:29:30.546093 | orchestrator | Sunday 01 June 2025 22:29:30 +0000 (0:00:02.013) 0:00:09.247 *********** 2025-06-01 22:29:30.623514 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:29:30.823495 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:29:30.823592 | orchestrator | 2025-06-01 22:29:30.823854 | orchestrator | TASK 
[osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-06-01 22:29:30.825813 | orchestrator | Sunday 01 June 2025 22:29:30 +0000 (0:00:00.288) 0:00:09.536 *********** 2025-06-01 22:29:31.839887 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:29:31.841912 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:29:31.842877 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:29:31.843878 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:29:31.845227 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:29:31.845822 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:29:31.846647 | orchestrator | 2025-06-01 22:29:31.847837 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-06-01 22:29:31.848742 | orchestrator | Sunday 01 June 2025 22:29:31 +0000 (0:00:01.014) 0:00:10.551 *********** 2025-06-01 22:29:31.924291 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:29:32.378703 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:29:32.378955 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:29:32.379922 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:29:32.380875 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:29:32.380944 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:29:32.382342 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:29:32.382638 | orchestrator | 2025-06-01 22:29:32.384643 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-06-01 22:29:32.385151 | orchestrator | Sunday 01 June 2025 22:29:32 +0000 (0:00:00.539) 0:00:11.090 *********** 2025-06-01 22:29:32.478827 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:29:32.499060 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:29:32.526589 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:29:32.824397 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:29:32.824783 | 
orchestrator | skipping: [testbed-node-1] 2025-06-01 22:29:32.825445 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:29:32.826385 | orchestrator | ok: [testbed-manager] 2025-06-01 22:29:32.826904 | orchestrator | 2025-06-01 22:29:32.827452 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-01 22:29:32.827906 | orchestrator | Sunday 01 June 2025 22:29:32 +0000 (0:00:00.444) 0:00:11.535 *********** 2025-06-01 22:29:32.904723 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:29:32.924788 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:29:32.954734 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:29:32.972700 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:29:33.029489 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:29:33.029684 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:29:33.030104 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:29:33.030722 | orchestrator | 2025-06-01 22:29:33.031574 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-01 22:29:33.032012 | orchestrator | Sunday 01 June 2025 22:29:33 +0000 (0:00:00.207) 0:00:11.743 *********** 2025-06-01 22:29:33.315398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:29:33.315789 | orchestrator | 2025-06-01 22:29:33.316456 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-01 22:29:33.317002 | orchestrator | Sunday 01 June 2025 22:29:33 +0000 (0:00:00.284) 0:00:12.027 *********** 2025-06-01 22:29:33.647094 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for 
testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:29:33.647247 | orchestrator | 2025-06-01 22:29:33.647339 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-01 22:29:33.647754 | orchestrator | Sunday 01 June 2025 22:29:33 +0000 (0:00:00.329) 0:00:12.357 *********** 2025-06-01 22:29:34.943008 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:29:34.943507 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:29:34.946318 | orchestrator | ok: [testbed-manager] 2025-06-01 22:29:34.946403 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:29:34.947351 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:29:34.948401 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:29:34.948863 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:29:34.949751 | orchestrator | 2025-06-01 22:29:34.950668 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-01 22:29:34.951261 | orchestrator | Sunday 01 June 2025 22:29:34 +0000 (0:00:01.296) 0:00:13.653 *********** 2025-06-01 22:29:35.024885 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:29:35.053977 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:29:35.083059 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:29:35.116457 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:29:35.194365 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:29:35.195491 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:29:35.196127 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:29:35.196823 | orchestrator | 2025-06-01 22:29:35.197664 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-01 22:29:35.198409 | orchestrator | Sunday 01 June 2025 22:29:35 +0000 (0:00:00.252) 0:00:13.905 *********** 2025-06-01 22:29:35.722445 | orchestrator | ok: [testbed-manager] 
2025-06-01 22:29:35.723574 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:29:35.723606 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:29:35.724551 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:29:35.725445 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:29:35.725981 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:29:35.726984 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:29:35.727819 | orchestrator | 2025-06-01 22:29:35.728423 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-01 22:29:35.728565 | orchestrator | Sunday 01 June 2025 22:29:35 +0000 (0:00:00.524) 0:00:14.430 *********** 2025-06-01 22:29:35.851767 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:29:35.888009 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:29:35.911426 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:29:35.981241 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:29:35.982077 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:29:35.982169 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:29:35.982936 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:29:35.983677 | orchestrator | 2025-06-01 22:29:35.983863 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-01 22:29:35.984552 | orchestrator | Sunday 01 June 2025 22:29:35 +0000 (0:00:00.262) 0:00:14.693 *********** 2025-06-01 22:29:36.524075 | orchestrator | ok: [testbed-manager] 2025-06-01 22:29:36.527085 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:29:36.527421 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:29:36.528285 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:29:36.529616 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:29:36.531128 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:29:36.531808 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:29:36.532583 | 
orchestrator | 2025-06-01 22:29:36.533194 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-01 22:29:36.533996 | orchestrator | Sunday 01 June 2025 22:29:36 +0000 (0:00:00.542) 0:00:15.235 *********** 2025-06-01 22:29:37.594229 | orchestrator | ok: [testbed-manager] 2025-06-01 22:29:37.594440 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:29:37.596979 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:29:37.598566 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:29:37.599530 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:29:37.600680 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:29:37.601354 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:29:37.602327 | orchestrator | 2025-06-01 22:29:37.602473 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-01 22:29:37.602973 | orchestrator | Sunday 01 June 2025 22:29:37 +0000 (0:00:01.069) 0:00:16.305 *********** 2025-06-01 22:29:39.707494 | orchestrator | ok: [testbed-manager] 2025-06-01 22:29:39.708750 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:29:39.710275 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:29:39.712336 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:29:39.713139 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:29:39.713550 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:29:39.714164 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:29:39.714973 | orchestrator | 2025-06-01 22:29:39.715963 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-01 22:29:39.718152 | orchestrator | Sunday 01 June 2025 22:29:39 +0000 (0:00:02.113) 0:00:18.418 *********** 2025-06-01 22:29:40.134645 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, 
testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:29:40.134805 | orchestrator | 2025-06-01 22:29:40.135458 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-01 22:29:40.136624 | orchestrator | Sunday 01 June 2025 22:29:40 +0000 (0:00:00.428) 0:00:18.846 *********** 2025-06-01 22:29:40.213709 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:29:41.396151 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:29:41.396709 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:29:41.398747 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:29:41.398777 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:29:41.399179 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:29:41.400840 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:29:41.401265 | orchestrator | 2025-06-01 22:29:41.402389 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-01 22:29:41.403280 | orchestrator | Sunday 01 June 2025 22:29:41 +0000 (0:00:01.259) 0:00:20.106 *********** 2025-06-01 22:29:41.495000 | orchestrator | ok: [testbed-manager] 2025-06-01 22:29:41.522302 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:29:41.546846 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:29:41.604791 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:29:41.605582 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:29:41.606972 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:29:41.607836 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:29:41.609068 | orchestrator | 2025-06-01 22:29:41.609127 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-01 22:29:41.609268 | orchestrator | Sunday 01 June 2025 22:29:41 +0000 (0:00:00.211) 0:00:20.317 *********** 2025-06-01 22:29:41.703189 | orchestrator | ok: [testbed-manager] 2025-06-01 22:29:41.730748 | orchestrator | 
ok: [testbed-node-3] 2025-06-01 22:29:41.754431 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:29:41.820244 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:29:41.820430 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:29:41.822110 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:29:41.823116 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:29:41.826937 | orchestrator | 2025-06-01 22:29:41.827606 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-01 22:29:41.828909 | orchestrator | Sunday 01 June 2025 22:29:41 +0000 (0:00:00.215) 0:00:20.533 *********** 2025-06-01 22:29:41.901914 | orchestrator | ok: [testbed-manager] 2025-06-01 22:29:41.927533 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:29:41.969123 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:29:41.995657 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:29:42.057264 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:29:42.057448 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:29:42.057531 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:29:42.059766 | orchestrator | 2025-06-01 22:29:42.059906 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-01 22:29:42.060222 | orchestrator | Sunday 01 June 2025 22:29:42 +0000 (0:00:00.236) 0:00:20.769 *********** 2025-06-01 22:29:42.347550 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:29:42.350554 | orchestrator | 2025-06-01 22:29:42.351860 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-01 22:29:42.352291 | orchestrator | Sunday 01 June 2025 22:29:42 +0000 (0:00:00.288) 0:00:21.058 *********** 2025-06-01 22:29:42.890297 | orchestrator | ok: [testbed-manager] 
2025-06-01 22:29:42.890405 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:29:42.890421 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:29:42.890433 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:29:42.891020 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:29:42.891393 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:29:42.891769 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:29:42.892518 | orchestrator | 2025-06-01 22:29:42.893254 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-01 22:29:42.893387 | orchestrator | Sunday 01 June 2025 22:29:42 +0000 (0:00:00.538) 0:00:21.596 *********** 2025-06-01 22:29:42.963425 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:29:42.986986 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:29:43.040685 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:29:43.104597 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:29:43.104763 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:29:43.105658 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:29:43.109463 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:29:43.109655 | orchestrator | 2025-06-01 22:29:43.110385 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-01 22:29:43.111446 | orchestrator | Sunday 01 June 2025 22:29:43 +0000 (0:00:00.220) 0:00:21.817 *********** 2025-06-01 22:29:44.212028 | orchestrator | ok: [testbed-manager] 2025-06-01 22:29:44.212686 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:29:44.215193 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:29:44.216616 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:29:44.217700 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:29:44.218766 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:29:44.219639 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:29:44.220911 | orchestrator | 2025-06-01 
22:29:44.222136 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-01 22:29:44.222872 | orchestrator | Sunday 01 June 2025 22:29:44 +0000 (0:00:01.104) 0:00:22.922 *********** 2025-06-01 22:29:44.776208 | orchestrator | ok: [testbed-manager] 2025-06-01 22:29:44.777567 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:29:44.779845 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:29:44.780650 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:29:44.781059 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:29:44.781948 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:29:44.782174 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:29:44.782858 | orchestrator | 2025-06-01 22:29:44.783331 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-01 22:29:44.783986 | orchestrator | Sunday 01 June 2025 22:29:44 +0000 (0:00:00.566) 0:00:23.488 *********** 2025-06-01 22:29:45.977738 | orchestrator | ok: [testbed-manager] 2025-06-01 22:29:45.977914 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:29:45.979811 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:29:45.979850 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:29:45.979863 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:29:45.980057 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:29:45.980830 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:29:45.982392 | orchestrator | 2025-06-01 22:29:45.982834 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-01 22:29:45.983013 | orchestrator | Sunday 01 June 2025 22:29:45 +0000 (0:00:01.199) 0:00:24.687 *********** 2025-06-01 22:29:59.112269 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:29:59.112376 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:29:59.112386 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:29:59.112394 | orchestrator | changed: [testbed-manager] 
2025-06-01 22:29:59.112402 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:29:59.112913 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:29:59.114574 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:29:59.115014 | orchestrator |
2025-06-01 22:29:59.115729 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-06-01 22:29:59.116539 | orchestrator | Sunday 01 June 2025 22:29:59 +0000 (0:00:13.130) 0:00:37.818 ***********
2025-06-01 22:29:59.186296 | orchestrator | ok: [testbed-manager]
2025-06-01 22:29:59.227987 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:29:59.257636 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:29:59.297785 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:29:59.365011 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:29:59.365553 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:29:59.366452 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:29:59.367671 | orchestrator |
2025-06-01 22:29:59.367912 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-06-01 22:29:59.368918 | orchestrator | Sunday 01 June 2025 22:29:59 +0000 (0:00:00.259) 0:00:38.077 ***********
2025-06-01 22:29:59.471705 | orchestrator | ok: [testbed-manager]
2025-06-01 22:29:59.504282 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:29:59.525682 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:29:59.582457 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:29:59.582546 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:29:59.583397 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:29:59.584969 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:29:59.586172 | orchestrator |
2025-06-01 22:29:59.588646 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-06-01 22:29:59.589858 | orchestrator | Sunday 01 June 2025 22:29:59 +0000 (0:00:00.215) 0:00:38.293 ***********
2025-06-01 22:29:59.660071 | orchestrator | ok: [testbed-manager]
2025-06-01 22:29:59.694822 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:29:59.721594 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:29:59.746866 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:29:59.813723 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:29:59.814737 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:29:59.815886 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:29:59.816967 | orchestrator |
2025-06-01 22:29:59.817784 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-06-01 22:29:59.818890 | orchestrator | Sunday 01 June 2025 22:29:59 +0000 (0:00:00.232) 0:00:38.526 ***********
2025-06-01 22:30:00.134989 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:30:00.135325 | orchestrator |
2025-06-01 22:30:00.136354 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-06-01 22:30:00.137284 | orchestrator | Sunday 01 June 2025 22:30:00 +0000 (0:00:00.319) 0:00:38.845 ***********
2025-06-01 22:30:01.742981 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:30:01.744552 | orchestrator | ok: [testbed-manager]
2025-06-01 22:30:01.745785 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:30:01.747372 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:30:01.748448 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:30:01.750224 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:30:01.751528 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:30:01.751803 | orchestrator |
2025-06-01 22:30:01.752819 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-06-01 22:30:01.753456 | orchestrator | Sunday 01 June 2025 22:30:01 +0000 (0:00:01.607) 0:00:40.452 ***********
2025-06-01 22:30:02.851677 | orchestrator | changed: [testbed-manager]
2025-06-01 22:30:02.851779 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:30:02.854215 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:30:02.854989 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:30:02.858816 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:30:02.859832 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:30:02.861333 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:30:02.862692 | orchestrator |
2025-06-01 22:30:02.863342 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-06-01 22:30:02.863807 | orchestrator | Sunday 01 June 2025 22:30:02 +0000 (0:00:01.105) 0:00:41.558 ***********
2025-06-01 22:30:03.657840 | orchestrator | ok: [testbed-manager]
2025-06-01 22:30:03.658662 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:30:03.659841 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:30:03.660995 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:30:03.662285 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:30:03.662820 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:30:03.665250 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:30:03.666585 | orchestrator |
2025-06-01 22:30:03.666700 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-06-01 22:30:03.667423 | orchestrator | Sunday 01 June 2025 22:30:03 +0000 (0:00:00.808) 0:00:42.367 ***********
2025-06-01 22:30:03.974283 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:30:03.974495 | orchestrator |
2025-06-01 22:30:03.974947 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-06-01 22:30:03.975640 | orchestrator | Sunday 01 June 2025 22:30:03 +0000 (0:00:00.319) 0:00:42.686 ***********
2025-06-01 22:30:05.002239 | orchestrator | changed: [testbed-manager]
2025-06-01 22:30:05.005205 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:30:05.005296 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:30:05.005311 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:30:05.005992 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:30:05.006222 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:30:05.006763 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:30:05.007878 | orchestrator |
2025-06-01 22:30:05.008559 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-06-01 22:30:05.009177 | orchestrator | Sunday 01 June 2025 22:30:04 +0000 (0:00:01.025) 0:00:43.712 ***********
2025-06-01 22:30:05.106127 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:30:05.131543 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:30:05.163728 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:30:05.338363 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:30:05.341428 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:30:05.342342 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:30:05.343428 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:30:05.344891 | orchestrator |
2025-06-01 22:30:05.347559 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-06-01 22:30:05.347587 | orchestrator | Sunday 01 June 2025 22:30:05 +0000 (0:00:00.336) 0:00:44.049 ***********
2025-06-01 22:30:18.234639 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:30:18.234756 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:30:18.234771 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:30:18.234783 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:30:18.234794 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:30:18.236766 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:30:18.236802 | orchestrator | changed: [testbed-manager]
2025-06-01 22:30:18.236816 | orchestrator |
2025-06-01 22:30:18.237602 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-06-01 22:30:18.238254 | orchestrator | Sunday 01 June 2025 22:30:18 +0000 (0:00:12.890) 0:00:56.939 ***********
2025-06-01 22:30:19.655314 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:30:19.656087 | orchestrator | ok: [testbed-manager]
2025-06-01 22:30:19.656111 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:30:19.657761 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:30:19.658558 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:30:19.661468 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:30:19.661495 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:30:19.661702 | orchestrator |
2025-06-01 22:30:19.666774 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-06-01 22:30:19.668815 | orchestrator | Sunday 01 June 2025 22:30:19 +0000 (0:00:01.427) 0:00:58.367 ***********
2025-06-01 22:30:20.530801 | orchestrator | ok: [testbed-manager]
2025-06-01 22:30:20.533110 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:30:20.533144 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:30:20.533156 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:30:20.533791 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:30:20.533812 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:30:20.534603 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:30:20.534840 | orchestrator |
2025-06-01 22:30:20.535854 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-06-01 22:30:20.535951 | orchestrator | Sunday 01 June 2025 22:30:20 +0000 (0:00:00.874) 0:00:59.242 ***********
2025-06-01 22:30:20.643055 | orchestrator | ok: [testbed-manager]
2025-06-01 22:30:20.669614 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:30:20.701582 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:30:20.725683 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:30:20.792120 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:30:20.793235 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:30:20.796455 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:30:20.796489 | orchestrator |
2025-06-01 22:30:20.796504 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-06-01 22:30:20.796518 | orchestrator | Sunday 01 June 2025 22:30:20 +0000 (0:00:00.262) 0:00:59.504 ***********
2025-06-01 22:30:20.894143 | orchestrator | ok: [testbed-manager]
2025-06-01 22:30:20.922737 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:30:20.954162 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:30:21.028140 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:30:21.029686 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:30:21.033696 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:30:21.033726 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:30:21.033738 | orchestrator |
2025-06-01 22:30:21.033751 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-06-01 22:30:21.034841 | orchestrator | Sunday 01 June 2025 22:30:21 +0000 (0:00:00.236) 0:00:59.740 ***********
2025-06-01 22:30:21.343525 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:30:21.346186 | orchestrator |
2025-06-01 22:30:21.346867 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-06-01 22:30:21.347557 | orchestrator | Sunday 01 June 2025 22:30:21 +0000 (0:00:00.312) 0:01:00.053 ***********
2025-06-01 22:30:22.956101 | orchestrator | ok: [testbed-manager]
2025-06-01 22:30:22.957179 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:30:22.959499 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:30:22.964314 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:30:22.965697 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:30:22.966815 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:30:22.967576 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:30:22.968587 | orchestrator |
2025-06-01 22:30:22.969581 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-06-01 22:30:22.970356 | orchestrator | Sunday 01 June 2025 22:30:22 +0000 (0:00:01.612) 0:01:01.666 ***********
2025-06-01 22:30:23.530284 | orchestrator | changed: [testbed-manager]
2025-06-01 22:30:23.531621 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:30:23.532128 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:30:23.533240 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:30:23.534202 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:30:23.535336 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:30:23.535438 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:30:23.535970 | orchestrator |
2025-06-01 22:30:23.536298 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-06-01 22:30:23.536801 | orchestrator | Sunday 01 June 2025 22:30:23 +0000 (0:00:00.575) 0:01:02.241 ***********
2025-06-01 22:30:23.612749 | orchestrator | ok: [testbed-manager]
2025-06-01 22:30:23.638799 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:30:23.667219 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:30:23.695136 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:30:23.766616 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:30:23.767470 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:30:23.768414 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:30:23.769625 | orchestrator |
2025-06-01 22:30:23.769799 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-06-01 22:30:23.770736 | orchestrator | Sunday 01 June 2025 22:30:23 +0000 (0:00:00.237) 0:01:02.479 ***********
2025-06-01 22:30:24.839449 | orchestrator | ok: [testbed-manager]
2025-06-01 22:30:24.840108 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:30:24.842079 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:30:24.843219 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:30:24.844438 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:30:24.845530 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:30:24.846353 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:30:24.847138 | orchestrator |
2025-06-01 22:30:24.847922 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-06-01 22:30:24.848623 | orchestrator | Sunday 01 June 2025 22:30:24 +0000 (0:00:01.071) 0:01:03.550 ***********
2025-06-01 22:30:26.485496 | orchestrator | changed: [testbed-manager]
2025-06-01 22:30:26.486571 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:30:26.487762 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:30:26.489232 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:30:26.489807 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:30:26.490811 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:30:26.491671 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:30:26.492741 | orchestrator |
2025-06-01 22:30:26.493335 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-06-01 22:30:26.493970 | orchestrator | Sunday 01 June 2025 22:30:26 +0000 (0:00:01.645) 0:01:05.196 ***********
2025-06-01 22:30:28.673072 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:30:28.673585 | orchestrator | ok: [testbed-manager]
2025-06-01 22:30:28.675500 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:30:28.675681 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:30:28.676898 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:30:28.677381 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:30:28.678372 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:30:28.679252 | orchestrator |
2025-06-01 22:30:28.679831 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-06-01 22:30:28.680237 | orchestrator | Sunday 01 June 2025 22:30:28 +0000 (0:00:02.187) 0:01:07.383 ***********
2025-06-01 22:31:06.160113 | orchestrator | ok: [testbed-manager]
2025-06-01 22:31:06.160226 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:31:06.160305 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:31:06.161328 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:31:06.161834 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:31:06.163983 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:31:06.164663 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:31:06.165563 | orchestrator |
2025-06-01 22:31:06.166237 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-06-01 22:31:06.166679 | orchestrator | Sunday 01 June 2025 22:31:06 +0000 (0:00:37.485) 0:01:44.869 ***********
2025-06-01 22:32:20.551429 | orchestrator | changed: [testbed-manager]
2025-06-01 22:32:20.551531 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:32:20.551546 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:32:20.551558 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:32:20.551570 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:32:20.551581 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:32:20.551592 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:32:20.553841 | orchestrator |
2025-06-01 22:32:20.554444 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-06-01 22:32:20.555516 | orchestrator | Sunday 01 June 2025 22:32:20 +0000 (0:01:14.386) 0:02:59.255 ***********
2025-06-01 22:32:22.283278 | orchestrator | ok: [testbed-manager]
2025-06-01 22:32:22.285237 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:32:22.285309 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:32:22.287050 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:32:22.287572 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:32:22.288723 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:32:22.289643 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:32:22.290815 | orchestrator |
2025-06-01 22:32:22.292097 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-06-01 22:32:22.293066 | orchestrator | Sunday 01 June 2025 22:32:22 +0000 (0:00:01.737) 0:03:00.993 ***********
2025-06-01 22:32:35.687891 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:32:35.688091 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:32:35.688114 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:32:35.689181 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:32:35.690260 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:32:35.691112 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:32:35.692078 | orchestrator | changed: [testbed-manager]
2025-06-01 22:32:35.692703 | orchestrator |
2025-06-01 22:32:35.693204 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-06-01 22:32:35.694217 | orchestrator | Sunday 01 June 2025 22:32:35 +0000 (0:00:13.401) 0:03:14.394 ***********
2025-06-01 22:32:36.147881 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-06-01 22:32:36.148072 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-06-01 22:32:36.149104 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-06-01 22:32:36.150420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-06-01 22:32:36.151575 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-06-01 22:32:36.152230 | orchestrator |
2025-06-01 22:32:36.153437 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-06-01 22:32:36.153865 | orchestrator | Sunday 01 June 2025 22:32:36 +0000 (0:00:00.462) 0:03:14.857 ***********
2025-06-01 22:32:36.206654 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-01 22:32:36.206785 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-01 22:32:36.238842 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:32:36.270507 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-01 22:32:36.270611 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:32:36.299883 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:32:36.300038 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-01 22:32:36.323698 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:32:36.867893 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-01 22:32:36.868546 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-01 22:32:36.868787 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-01 22:32:36.870421 | orchestrator |
2025-06-01 22:32:36.870871 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-06-01 22:32:36.871100 | orchestrator | Sunday 01 June 2025 22:32:36 +0000 (0:00:00.721) 0:03:15.578 ***********
2025-06-01 22:32:36.971755 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-01 22:32:36.972946 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-01 22:32:36.977091 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-01 22:32:36.977120 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-01 22:32:36.977161 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-01 22:32:36.977175 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-01 22:32:36.977186 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-01 22:32:36.977197 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-01 22:32:36.977209 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-01 22:32:36.977219 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-01 22:32:36.977230 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-01 22:32:36.977287 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-01 22:32:36.977901 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-01 22:32:36.979514 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-01 22:32:37.021840 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-01 22:32:37.023030 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-01 22:32:37.024193 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-01 22:32:37.025356 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:32:37.026332 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-01 22:32:37.027171 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-01 22:32:37.027824 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-01 22:32:37.028835 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-01 22:32:37.029581 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-01 22:32:37.060882 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-01 22:32:37.061573 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:32:37.062290 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-01 22:32:37.063134 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-01 22:32:37.063834 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-01 22:32:37.064469 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-01 22:32:37.065286 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-01 22:32:37.065602 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-01 22:32:37.066476 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-01 22:32:37.067243 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-01 22:32:37.068128 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-01 22:32:37.099494 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:32:37.100307 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-01 22:32:37.101223 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-01 22:32:37.101787 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-01 22:32:37.102236 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-01 22:32:37.102712 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-01 22:32:37.103206 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-01 22:32:37.103841 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-01 22:32:37.104051 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-01 22:32:37.126357 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:32:41.488423 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-01 22:32:41.488532 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-01 22:32:41.489640 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-01 22:32:41.491134 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-01 22:32:41.493620 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-01 22:32:41.495319 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-01 22:32:41.496287 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-01 22:32:41.497602 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-01 22:32:41.499035 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-01 22:32:41.500069 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-01 22:32:41.500965 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-01 22:32:41.502003 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-01 22:32:41.502815 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-01 22:32:41.503593 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-01 22:32:41.504282 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-01 22:32:41.506121 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-01 22:32:41.506950 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-01 22:32:41.507750 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-01 22:32:41.508378 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-01 22:32:41.509046 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-01 22:32:41.509743 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-01 22:32:41.510767 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-01 22:32:41.511225 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-01 22:32:41.511940 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-01 22:32:41.513391 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-01 22:32:41.514313 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-01 22:32:41.514836 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-01 22:32:41.515480 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-01 22:32:41.516514 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-01 22:32:41.519985 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-01 22:32:41.520842 | orchestrator |
2025-06-01 22:32:41.521558 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-06-01 22:32:41.523065 | orchestrator | Sunday 01 June 2025 22:32:41 +0000 (0:00:04.618) 0:03:20.197 ***********
2025-06-01 22:32:42.048796 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-01 22:32:42.048959 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-01 22:32:42.049628 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-01 22:32:42.050433 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-01 22:32:42.051012 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-01 22:32:42.051496 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-01 22:32:42.052966 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-01 22:32:42.053294 | orchestrator |
2025-06-01 22:32:42.054564 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-06-01 22:32:42.055119 | orchestrator | Sunday 01 June 2025 22:32:42 +0000 (0:00:00.564) 0:03:20.762 ***********
2025-06-01 22:32:42.114218 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-01 22:32:42.145008 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:32:42.226455 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-01 22:32:42.226618 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-01 22:32:42.562628 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:32:42.563522 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:32:42.564755 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-01 22:32:42.566155 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:32:42.567431 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-01 22:32:42.568577 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-01 22:32:42.569092 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-01 22:32:42.569538 | orchestrator |
2025-06-01 22:32:42.570310 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-06-01 22:32:42.570829 | orchestrator | Sunday 01 June 2025 22:32:42 +0000 (0:00:00.511) 0:03:21.274 ***********
2025-06-01 22:32:42.622522 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-01 22:32:42.653876 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:32:42.736686 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-01 22:32:43.151085 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:32:43.151292 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-01 22:32:43.151444 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:32:43.151940 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-01 22:32:43.152325 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:32:43.153879 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-01 22:32:43.154213 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-01 22:32:43.154900 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-01 22:32:43.155624 | orchestrator |
2025-06-01 22:32:43.156066 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-06-01 22:32:43.156404 | orchestrator | Sunday 01 June 2025 22:32:43 +0000 (0:00:00.590) 0:03:21.864 ***********
2025-06-01 22:32:43.240698 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:32:43.271270 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:32:43.293905 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:32:43.324375 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:32:43.467696 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:32:43.468814 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:32:43.470366 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:32:43.471526 | orchestrator |
2025-06-01 22:32:43.472622 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-06-01 22:32:43.473753 | orchestrator | Sunday 01 June 2025 22:32:43 +0000 (0:00:00.314) 0:03:22.179 ***********
2025-06-01 22:32:49.082740 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:32:49.083561 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:32:49.085685 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:32:49.086932 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:32:49.087775 | orchestrator | ok: [testbed-manager]
2025-06-01 22:32:49.088460 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:32:49.089606 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:32:49.090099 | orchestrator |
2025-06-01 22:32:49.090942 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-06-01 22:32:49.091844 | orchestrator | Sunday 01 June 2025 22:32:49 +0000 (0:00:05.615) 0:03:27.795 ***********
2025-06-01 22:32:49.154470 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-06-01 22:32:49.185683 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-06-01 22:32:49.185969 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:32:49.224120 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:32:49.224880 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-06-01 22:32:49.225925 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-06-01 22:32:49.261594 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:32:49.293880 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:32:49.294379 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-06-01 22:32:49.377760 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:32:49.378465 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-06-01 22:32:49.378646 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:32:49.380447 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-06-01 22:32:49.380899 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:32:49.381583 | orchestrator |
2025-06-01 22:32:49.382254 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-06-01 22:32:49.382556 | orchestrator | Sunday 01 June 2025 22:32:49 +0000 (0:00:00.296) 0:03:28.091 ***********
2025-06-01 22:32:50.407506 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-06-01 22:32:50.407665 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-06-01 22:32:50.408411 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-06-01 22:32:50.409135 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-06-01 22:32:50.409873 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-06-01 22:32:50.410277 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-06-01 22:32:50.410873 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-06-01 22:32:50.411524 | orchestrator |
2025-06-01 22:32:50.412198 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-06-01 22:32:50.412632 | orchestrator | Sunday 01 June 2025 22:32:50 +0000 (0:00:01.026) 0:03:29.118 ***********
2025-06-01 22:32:50.954731 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:32:50.955552 | orchestrator |
2025-06-01 22:32:50.956571 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-06-01 22:32:50.957518 | orchestrator | Sunday 01 June 2025 22:32:50 +0000 (0:00:00.547) 0:03:29.665 ***********
2025-06-01 22:32:52.159840 |
orchestrator | ok: [testbed-manager] 2025-06-01 22:32:52.160811 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:32:52.161387 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:32:52.162504 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:32:52.162959 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:32:52.164405 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:32:52.165182 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:32:52.165525 | orchestrator | 2025-06-01 22:32:52.166375 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-06-01 22:32:52.167101 | orchestrator | Sunday 01 June 2025 22:32:52 +0000 (0:00:01.205) 0:03:30.871 *********** 2025-06-01 22:32:52.807583 | orchestrator | ok: [testbed-manager] 2025-06-01 22:32:52.807724 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:32:52.807909 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:32:52.809271 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:32:52.810110 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:32:52.814196 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:32:52.814234 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:32:52.815200 | orchestrator | 2025-06-01 22:32:52.816020 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-06-01 22:32:52.817145 | orchestrator | Sunday 01 June 2025 22:32:52 +0000 (0:00:00.644) 0:03:31.516 *********** 2025-06-01 22:32:53.428504 | orchestrator | changed: [testbed-manager] 2025-06-01 22:32:53.428610 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:32:53.430400 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:32:53.431198 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:32:53.433041 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:32:53.433623 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:32:53.434574 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:32:53.435418 | orchestrator | 
2025-06-01 22:32:53.436329 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-06-01 22:32:53.436938 | orchestrator | Sunday 01 June 2025 22:32:53 +0000 (0:00:00.620) 0:03:32.137 *********** 2025-06-01 22:32:54.036043 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:32:54.036215 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:32:54.037521 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:32:54.038244 | orchestrator | ok: [testbed-manager] 2025-06-01 22:32:54.039310 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:32:54.039353 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:32:54.040041 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:32:54.040709 | orchestrator | 2025-06-01 22:32:54.041373 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-06-01 22:32:54.041882 | orchestrator | Sunday 01 June 2025 22:32:54 +0000 (0:00:00.608) 0:03:32.746 *********** 2025-06-01 22:32:54.985509 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748815731.8546665, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:32:54.986185 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748815777.013917, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:32:54.989478 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748815785.636344, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:32:54.989525 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748815785.5381446, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:32:54.989538 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748815791.810599, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:32:54.989606 | orchestrator | changed: 
[testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748815792.069253, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:32:54.989732 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748815793.2139485, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:32:54.991016 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748815756.0739214, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:32:54.991933 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 
'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748815679.567433, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:32:54.992927 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748815684.887356, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:32:54.994119 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748815690.5761108, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:32:54.994982 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748815682.010597, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:32:54.995774 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748815687.8069525, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:32:54.996836 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748815694.1020691, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:32:54.997525 | orchestrator | 2025-06-01 22:32:54.998548 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-06-01 22:32:54.999072 | orchestrator | Sunday 01 June 2025 22:32:54 +0000 (0:00:00.950) 0:03:33.697 *********** 2025-06-01 22:32:56.122321 | orchestrator | changed: [testbed-manager] 2025-06-01 22:32:56.125231 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:32:56.126530 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:32:56.127421 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:32:56.128470 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:32:56.129280 | orchestrator | changed: [testbed-node-1] 
2025-06-01 22:32:56.130328 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:32:56.131658 | orchestrator | 2025-06-01 22:32:56.132948 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-06-01 22:32:56.134272 | orchestrator | Sunday 01 June 2025 22:32:56 +0000 (0:00:01.135) 0:03:34.833 *********** 2025-06-01 22:32:57.216109 | orchestrator | changed: [testbed-manager] 2025-06-01 22:32:57.219464 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:32:57.219513 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:32:57.219524 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:32:57.219943 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:32:57.221103 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:32:57.221981 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:32:57.223933 | orchestrator | 2025-06-01 22:32:57.224231 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-06-01 22:32:57.225462 | orchestrator | Sunday 01 June 2025 22:32:57 +0000 (0:00:01.093) 0:03:35.926 *********** 2025-06-01 22:32:58.375111 | orchestrator | changed: [testbed-manager] 2025-06-01 22:32:58.377327 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:32:58.379129 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:32:58.379982 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:32:58.381359 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:32:58.382314 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:32:58.383326 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:32:58.384998 | orchestrator | 2025-06-01 22:32:58.386277 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-06-01 22:32:58.387178 | orchestrator | Sunday 01 June 2025 22:32:58 +0000 (0:00:01.158) 0:03:37.085 *********** 2025-06-01 22:32:58.489366 | orchestrator | skipping: [testbed-manager] 2025-06-01 
22:32:58.539651 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:32:58.576069 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:32:58.610862 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:32:58.666613 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:32:58.668333 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:32:58.669926 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:32:58.671485 | orchestrator | 2025-06-01 22:32:58.673217 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-06-01 22:32:58.674553 | orchestrator | Sunday 01 June 2025 22:32:58 +0000 (0:00:00.293) 0:03:37.378 *********** 2025-06-01 22:32:59.376043 | orchestrator | ok: [testbed-manager] 2025-06-01 22:32:59.377549 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:32:59.379775 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:32:59.382281 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:32:59.383455 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:32:59.384657 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:32:59.385467 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:32:59.386546 | orchestrator | 2025-06-01 22:32:59.388103 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-06-01 22:32:59.388131 | orchestrator | Sunday 01 June 2025 22:32:59 +0000 (0:00:00.707) 0:03:38.086 *********** 2025-06-01 22:32:59.781329 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:32:59.782906 | orchestrator | 2025-06-01 22:32:59.784036 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-06-01 22:32:59.785841 | orchestrator | Sunday 01 June 2025 22:32:59 +0000 (0:00:00.408) 0:03:38.494 
*********** 2025-06-01 22:33:07.410935 | orchestrator | ok: [testbed-manager] 2025-06-01 22:33:07.411859 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:33:07.413122 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:33:07.415193 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:33:07.415427 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:33:07.417782 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:33:07.418486 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:33:07.419251 | orchestrator | 2025-06-01 22:33:07.420483 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-06-01 22:33:07.420505 | orchestrator | Sunday 01 June 2025 22:33:07 +0000 (0:00:07.628) 0:03:46.122 *********** 2025-06-01 22:33:08.510273 | orchestrator | ok: [testbed-manager] 2025-06-01 22:33:08.512366 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:33:08.512895 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:33:08.513249 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:33:08.513813 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:33:08.514263 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:33:08.515119 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:33:08.515391 | orchestrator | 2025-06-01 22:33:08.516720 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-06-01 22:33:08.516753 | orchestrator | Sunday 01 June 2025 22:33:08 +0000 (0:00:01.098) 0:03:47.221 *********** 2025-06-01 22:33:09.557364 | orchestrator | ok: [testbed-manager] 2025-06-01 22:33:09.560198 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:33:09.560284 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:33:09.561766 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:33:09.562611 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:33:09.564003 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:33:09.565053 | orchestrator | ok: [testbed-node-2] 2025-06-01 
22:33:09.565828 | orchestrator | 2025-06-01 22:33:09.566948 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-06-01 22:33:09.567022 | orchestrator | Sunday 01 June 2025 22:33:09 +0000 (0:00:01.047) 0:03:48.268 *********** 2025-06-01 22:33:10.102154 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:33:10.102255 | orchestrator | 2025-06-01 22:33:10.102329 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-06-01 22:33:10.102732 | orchestrator | Sunday 01 June 2025 22:33:10 +0000 (0:00:00.547) 0:03:48.815 *********** 2025-06-01 22:33:18.364679 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:33:18.366244 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:33:18.368258 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:33:18.369618 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:33:18.371054 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:33:18.372562 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:33:18.373437 | orchestrator | changed: [testbed-manager] 2025-06-01 22:33:18.374506 | orchestrator | 2025-06-01 22:33:18.375297 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-06-01 22:33:18.376156 | orchestrator | Sunday 01 June 2025 22:33:18 +0000 (0:00:08.258) 0:03:57.073 *********** 2025-06-01 22:33:19.022765 | orchestrator | changed: [testbed-manager] 2025-06-01 22:33:19.024452 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:33:19.026229 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:33:19.027555 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:33:19.028722 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:33:19.030269 | 
orchestrator | changed: [testbed-node-1] 2025-06-01 22:33:19.030883 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:33:19.031960 | orchestrator | 2025-06-01 22:33:19.032950 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-06-01 22:33:19.033636 | orchestrator | Sunday 01 June 2025 22:33:19 +0000 (0:00:00.658) 0:03:57.732 *********** 2025-06-01 22:33:20.134348 | orchestrator | changed: [testbed-manager] 2025-06-01 22:33:20.136016 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:33:20.139140 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:33:20.139811 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:33:20.141083 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:33:20.142227 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:33:20.143154 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:33:20.143540 | orchestrator | 2025-06-01 22:33:20.144304 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-06-01 22:33:20.145229 | orchestrator | Sunday 01 June 2025 22:33:20 +0000 (0:00:01.113) 0:03:58.846 *********** 2025-06-01 22:33:21.201229 | orchestrator | changed: [testbed-manager] 2025-06-01 22:33:21.205481 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:33:21.205584 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:33:21.206843 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:33:21.207983 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:33:21.208548 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:33:21.209852 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:33:21.210725 | orchestrator | 2025-06-01 22:33:21.213188 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-06-01 22:33:21.213225 | orchestrator | Sunday 01 June 2025 22:33:21 +0000 (0:00:01.063) 0:03:59.909 *********** 2025-06-01 22:33:21.317691 | orchestrator | ok: 
[testbed-manager] 2025-06-01 22:33:21.356424 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:33:21.389937 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:33:21.429689 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:33:21.503371 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:33:21.504521 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:33:21.506223 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:33:21.507200 | orchestrator | 2025-06-01 22:33:21.508319 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-06-01 22:33:21.509009 | orchestrator | Sunday 01 June 2025 22:33:21 +0000 (0:00:00.305) 0:04:00.215 *********** 2025-06-01 22:33:21.648869 | orchestrator | ok: [testbed-manager] 2025-06-01 22:33:21.693410 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:33:21.733400 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:33:21.775527 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:33:21.859821 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:33:21.860420 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:33:21.861186 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:33:21.862392 | orchestrator | 2025-06-01 22:33:21.862706 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-06-01 22:33:21.863725 | orchestrator | Sunday 01 June 2025 22:33:21 +0000 (0:00:00.357) 0:04:00.572 *********** 2025-06-01 22:33:21.963595 | orchestrator | ok: [testbed-manager] 2025-06-01 22:33:21.999357 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:33:22.032019 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:33:22.071204 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:33:22.167391 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:33:22.167946 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:33:22.169108 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:33:22.170440 | orchestrator | 2025-06-01 22:33:22.171415 | 
orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-06-01 22:33:22.174067 | orchestrator | Sunday 01 June 2025 22:33:22 +0000 (0:00:00.307) 0:04:00.880 *********** 2025-06-01 22:33:27.694183 | orchestrator | ok: [testbed-manager] 2025-06-01 22:33:27.694470 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:33:27.695429 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:33:27.697202 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:33:27.698381 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:33:27.699257 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:33:27.700359 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:33:27.701225 | orchestrator | 2025-06-01 22:33:27.702835 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-06-01 22:33:27.702861 | orchestrator | Sunday 01 June 2025 22:33:27 +0000 (0:00:05.526) 0:04:06.406 *********** 2025-06-01 22:33:28.099202 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:33:28.100080 | orchestrator | 2025-06-01 22:33:28.100287 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-06-01 22:33:28.100776 | orchestrator | Sunday 01 June 2025 22:33:28 +0000 (0:00:00.405) 0:04:06.812 *********** 2025-06-01 22:33:28.197581 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-06-01 22:33:28.198656 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-06-01 22:33:28.199400 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-06-01 22:33:28.200152 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-06-01 22:33:28.238333 | orchestrator | skipping: [testbed-manager] 2025-06-01 
22:33:28.238826 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-06-01 22:33:28.295167 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:33:28.295227 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-06-01 22:33:28.295482 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-06-01 22:33:28.340107 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:33:28.342140 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-06-01 22:33:28.343961 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-06-01 22:33:28.344016 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-06-01 22:33:28.387923 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:33:28.388016 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-06-01 22:33:28.388032 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-06-01 22:33:28.468601 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:33:28.469521 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:33:28.469554 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-06-01 22:33:28.469612 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-06-01 22:33:28.469941 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:33:28.470836 | orchestrator |
2025-06-01 22:33:28.471288 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-06-01 22:33:28.472079 | orchestrator | Sunday 01 June 2025 22:33:28 +0000 (0:00:00.366) 0:04:07.179 ***********
2025-06-01 22:33:28.946381 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:33:28.946966 | orchestrator |
2025-06-01 22:33:28.946997 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-06-01 22:33:28.947317 | orchestrator | Sunday 01 June 2025 22:33:28 +0000 (0:00:00.479) 0:04:07.659 ***********
2025-06-01 22:33:29.028106 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-06-01 22:33:29.073538 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:33:29.074381 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-06-01 22:33:29.113279 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-06-01 22:33:29.113476 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:33:29.114263 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-06-01 22:33:29.147346 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:33:29.191312 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-06-01 22:33:29.192165 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:33:29.192452 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-06-01 22:33:29.278552 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:33:29.279329 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:33:29.279730 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-06-01 22:33:29.280913 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:33:29.282077 | orchestrator |
2025-06-01 22:33:29.282875 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-06-01 22:33:29.283226 | orchestrator | Sunday 01 June 2025 22:33:29 +0000 (0:00:00.332) 0:04:07.991 ***********
2025-06-01 22:33:29.870121 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:33:29.873394 | orchestrator |
2025-06-01 22:33:29.873493 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-06-01 22:33:29.873509 | orchestrator | Sunday 01 June 2025 22:33:29 +0000 (0:00:00.589) 0:04:08.580 ***********
2025-06-01 22:34:03.329085 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:34:03.329195 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:34:03.329276 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:34:03.330152 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:34:03.331545 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:34:03.332399 | orchestrator | changed: [testbed-manager]
2025-06-01 22:34:03.333177 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:34:03.333810 | orchestrator |
2025-06-01 22:34:03.334492 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-06-01 22:34:03.335236 | orchestrator | Sunday 01 June 2025 22:34:03 +0000 (0:00:33.459) 0:04:42.040 ***********
2025-06-01 22:34:10.817984 | orchestrator | changed: [testbed-manager]
2025-06-01 22:34:10.820933 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:34:10.822645 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:34:10.823075 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:34:10.823933 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:34:10.824134 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:34:10.824880 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:34:10.825681 | orchestrator |
2025-06-01 22:34:10.826266 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-06-01 22:34:10.826957 | orchestrator | Sunday 01 June 2025 22:34:10 +0000 (0:00:07.487) 0:04:49.527 ***********
2025-06-01 22:34:17.853969 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:34:17.854443 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:34:17.856781 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:34:17.857670 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:34:17.859043 | orchestrator | changed: [testbed-manager]
2025-06-01 22:34:17.860962 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:34:17.862147 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:34:17.863251 | orchestrator |
2025-06-01 22:34:17.864390 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-06-01 22:34:17.865252 | orchestrator | Sunday 01 June 2025 22:34:17 +0000 (0:00:07.038) 0:04:56.566 ***********
2025-06-01 22:34:19.469111 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:34:19.470620 | orchestrator | ok: [testbed-manager]
2025-06-01 22:34:19.470995 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:34:19.471441 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:34:19.471985 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:34:19.472451 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:34:19.472933 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:34:19.473640 | orchestrator |
2025-06-01 22:34:19.474139 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-06-01 22:34:19.475576 | orchestrator | Sunday 01 June 2025 22:34:19 +0000 (0:00:01.612) 0:04:58.178 ***********
2025-06-01 22:34:24.830129 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:34:24.830683 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:34:24.831745 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:34:24.833423 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:34:24.834453 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:34:24.835484 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:34:24.836103 | orchestrator | changed: [testbed-manager]
2025-06-01 22:34:24.836613 | orchestrator |
2025-06-01 22:34:24.837280 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-06-01 22:34:24.837696 | orchestrator | Sunday 01 June 2025 22:34:24 +0000 (0:00:05.362) 0:05:03.541 ***********
2025-06-01 22:34:25.278612 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:34:25.278779 | orchestrator |
2025-06-01 22:34:25.279379 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-06-01 22:34:25.280290 | orchestrator | Sunday 01 June 2025 22:34:25 +0000 (0:00:00.448) 0:05:03.990 ***********
2025-06-01 22:34:26.002818 | orchestrator | changed: [testbed-manager]
2025-06-01 22:34:26.002921 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:34:26.003353 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:34:26.004480 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:34:26.005827 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:34:26.005850 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:34:26.007391 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:34:26.007482 | orchestrator |
2025-06-01 22:34:26.008450 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-06-01 22:34:26.009022 | orchestrator | Sunday 01 June 2025 22:34:25 +0000 (0:00:00.722) 0:05:04.712 ***********
2025-06-01 22:34:27.520298 | orchestrator | ok: [testbed-manager]
2025-06-01 22:34:27.521247 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:34:27.525108 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:34:27.525716 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:34:27.526936 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:34:27.529492 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:34:27.532294 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:34:27.532518 | orchestrator |
2025-06-01 22:34:27.533472 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-06-01 22:34:27.535064 | orchestrator | Sunday 01 June 2025 22:34:27 +0000 (0:00:01.517) 0:05:06.230 ***********
2025-06-01 22:34:28.277475 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:34:28.277692 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:34:28.277784 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:34:28.278715 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:34:28.279078 | orchestrator | changed: [testbed-manager]
2025-06-01 22:34:28.280360 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:34:28.280458 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:34:28.280949 | orchestrator |
2025-06-01 22:34:28.281241 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-06-01 22:34:28.281973 | orchestrator | Sunday 01 June 2025 22:34:28 +0000 (0:00:00.758) 0:05:06.989 ***********
2025-06-01 22:34:28.351320 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:34:28.383449 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:34:28.440321 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:34:28.474980 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:34:28.521203 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:34:28.601359 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:34:28.601668 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:34:28.602235 | orchestrator |
2025-06-01 22:34:28.604788 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-06-01 22:34:28.604818 | orchestrator | Sunday 01 June 2025 22:34:28 +0000 (0:00:00.324) 0:05:07.313 ***********
2025-06-01 22:34:28.683035 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:34:28.716614 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:34:28.751388 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:34:28.785801 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:34:28.814696 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:34:29.004265 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:34:29.004679 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:34:29.005159 | orchestrator |
2025-06-01 22:34:29.006551 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-06-01 22:34:29.007425 | orchestrator | Sunday 01 June 2025 22:34:28 +0000 (0:00:00.401) 0:05:07.715 ***********
2025-06-01 22:34:29.110724 | orchestrator | ok: [testbed-manager]
2025-06-01 22:34:29.147060 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:34:29.183222 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:34:29.222079 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:34:29.314281 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:34:29.314691 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:34:29.316483 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:34:29.318457 | orchestrator |
2025-06-01 22:34:29.320165 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-06-01 22:34:29.321612 | orchestrator | Sunday 01 June 2025 22:34:29 +0000 (0:00:00.310) 0:05:08.026 ***********
2025-06-01 22:34:29.435903 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:34:29.475393 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:34:29.512985 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:34:29.548433 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:34:29.611718 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:34:29.611827 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:34:29.612698 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:34:29.614378 | orchestrator |
2025-06-01 22:34:29.615938 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-06-01 22:34:29.617245 | orchestrator | Sunday 01 June 2025 22:34:29 +0000 (0:00:00.298) 0:05:08.324 ***********
2025-06-01 22:34:29.736087 | orchestrator | ok: [testbed-manager]
2025-06-01 22:34:29.777252 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:34:29.833023 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:34:29.876477 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:34:29.955268 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:34:29.955449 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:34:29.956319 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:34:29.957041 | orchestrator |
2025-06-01 22:34:29.959103 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-06-01 22:34:29.959706 | orchestrator | Sunday 01 June 2025 22:34:29 +0000 (0:00:00.342) 0:05:08.667 ***********
2025-06-01 22:34:30.072459 | orchestrator | ok: [testbed-manager] => {
2025-06-01 22:34:30.072862 | orchestrator |     "docker_version": "5:27.5.1"
2025-06-01 22:34:30.073393 | orchestrator | }
2025-06-01 22:34:30.107711 | orchestrator | ok: [testbed-node-3] => {
2025-06-01 22:34:30.107887 | orchestrator |     "docker_version": "5:27.5.1"
2025-06-01 22:34:30.107966 | orchestrator | }
2025-06-01 22:34:30.140428 | orchestrator | ok: [testbed-node-4] => {
2025-06-01 22:34:30.140495 | orchestrator |     "docker_version": "5:27.5.1"
2025-06-01 22:34:30.141445 | orchestrator | }
2025-06-01 22:34:30.176363 | orchestrator | ok: [testbed-node-5] => {
2025-06-01 22:34:30.176830 | orchestrator |     "docker_version": "5:27.5.1"
2025-06-01 22:34:30.177699 | orchestrator | }
2025-06-01 22:34:30.254216 | orchestrator | ok: [testbed-node-0] => {
2025-06-01 22:34:30.255048 | orchestrator |     "docker_version": "5:27.5.1"
2025-06-01 22:34:30.257388 | orchestrator | }
2025-06-01 22:34:30.258173 | orchestrator | ok: [testbed-node-1] => {
2025-06-01 22:34:30.259665 | orchestrator |     "docker_version": "5:27.5.1"
2025-06-01 22:34:30.260718 | orchestrator | }
2025-06-01 22:34:30.260951 | orchestrator | ok: [testbed-node-2] => {
2025-06-01 22:34:30.262437 | orchestrator |     "docker_version": "5:27.5.1"
2025-06-01 22:34:30.263509 | orchestrator | }
2025-06-01 22:34:30.264313 | orchestrator |
2025-06-01 22:34:30.264967 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-06-01 22:34:30.265502 | orchestrator | Sunday 01 June 2025 22:34:30 +0000 (0:00:00.300) 0:05:08.967 ***********
2025-06-01 22:34:30.376988 | orchestrator | ok: [testbed-manager] => {
2025-06-01 22:34:30.377091 | orchestrator |     "docker_cli_version": "5:27.5.1"
2025-06-01 22:34:30.378167 | orchestrator | }
2025-06-01 22:34:30.519181 | orchestrator | ok: [testbed-node-3] => {
2025-06-01 22:34:30.519377 | orchestrator |     "docker_cli_version": "5:27.5.1"
2025-06-01 22:34:30.519982 | orchestrator | }
2025-06-01 22:34:30.554610 | orchestrator | ok: [testbed-node-4] => {
2025-06-01 22:34:30.555188 | orchestrator |     "docker_cli_version": "5:27.5.1"
2025-06-01 22:34:30.555218 | orchestrator | }
2025-06-01 22:34:30.605515 | orchestrator | ok: [testbed-node-5] => {
2025-06-01 22:34:30.605724 | orchestrator |     "docker_cli_version": "5:27.5.1"
2025-06-01 22:34:30.606700 | orchestrator | }
2025-06-01 22:34:30.681650 | orchestrator | ok: [testbed-node-0] => {
2025-06-01 22:34:30.682082 | orchestrator |     "docker_cli_version": "5:27.5.1"
2025-06-01 22:34:30.684848 | orchestrator | }
2025-06-01 22:34:30.685783 | orchestrator | ok: [testbed-node-1] => {
2025-06-01 22:34:30.690739 | orchestrator |     "docker_cli_version": "5:27.5.1"
2025-06-01 22:34:30.695033 | orchestrator | }
2025-06-01 22:34:30.695814 | orchestrator | ok: [testbed-node-2] => {
2025-06-01 22:34:30.697550 | orchestrator |     "docker_cli_version": "5:27.5.1"
2025-06-01 22:34:30.698112 | orchestrator | }
2025-06-01 22:34:30.698997 | orchestrator |
2025-06-01 22:34:30.701600 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-06-01 22:34:30.705349 | orchestrator | Sunday 01 June 2025 22:34:30 +0000 (0:00:00.425) 0:05:09.392 ***********
2025-06-01 22:34:30.764601 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:34:30.798476 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:34:30.831669 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:34:30.878330 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:34:30.972637 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:34:30.973955 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:34:30.974951 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:34:30.976932 | orchestrator |
2025-06-01 22:34:30.977799 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-06-01 22:34:30.978225 | orchestrator | Sunday 01 June 2025 22:34:30 +0000 (0:00:00.293) 0:05:09.686 ***********
2025-06-01 22:34:31.112089 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:34:31.147576 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:34:31.183858 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:34:31.218428 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:34:31.301297 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:34:31.301400 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:34:31.302629 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:34:31.303537 | orchestrator |
2025-06-01 22:34:31.304350 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-06-01 22:34:31.305006 | orchestrator | Sunday 01 June 2025 22:34:31 +0000 (0:00:00.326) 0:05:10.013 ***********
2025-06-01 22:34:31.752424 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:34:31.752644 | orchestrator |
2025-06-01 22:34:31.753369 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-06-01 22:34:31.754170 | orchestrator | Sunday 01 June 2025 22:34:31 +0000 (0:00:00.451) 0:05:10.464 ***********
2025-06-01 22:34:32.587004 | orchestrator | ok: [testbed-manager]
2025-06-01 22:34:32.587341 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:34:32.587687 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:34:32.588460 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:34:32.589307 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:34:32.590696 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:34:32.591722 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:34:32.592507 | orchestrator |
2025-06-01 22:34:32.593381 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-06-01 22:34:32.594672 | orchestrator | Sunday 01 June 2025 22:34:32 +0000 (0:00:00.833) 0:05:11.297 ***********
2025-06-01 22:34:35.452207 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:34:35.452767 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:34:35.453289 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:34:35.453975 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:34:35.454442 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:34:35.455707 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:34:35.456103 | orchestrator | ok: [testbed-manager]
2025-06-01 22:34:35.456689 | orchestrator |
2025-06-01 22:34:35.457497 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-06-01 22:34:35.458106 | orchestrator | Sunday 01 June 2025 22:34:35 +0000 (0:00:02.866) 0:05:14.164 ***********
2025-06-01 22:34:35.534566 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-06-01 22:34:35.535075 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-06-01 22:34:35.635431 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-06-01 22:34:35.635662 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-06-01 22:34:35.636215 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-06-01 22:34:35.636759 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-06-01 22:34:35.721340 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:34:35.721451 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-06-01 22:34:35.722399 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-06-01 22:34:35.723863 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-06-01 22:34:35.945046 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:34:35.946096 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-06-01 22:34:35.946955 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-06-01 22:34:35.948469 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-06-01 22:34:36.031725 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:34:36.033015 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-06-01 22:34:36.034477 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-06-01 22:34:36.035255 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-06-01 22:34:36.142086 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:34:36.142184 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-06-01 22:34:36.143073 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-06-01 22:34:36.143470 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-06-01 22:34:36.289294 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:34:36.289382 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:34:36.292043 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-06-01 22:34:36.295001 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-06-01 22:34:36.295584 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-06-01 22:34:36.296298 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:34:36.297303 | orchestrator |
2025-06-01 22:34:36.298107 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-06-01 22:34:36.301042 | orchestrator | Sunday 01 June 2025 22:34:36 +0000 (0:00:00.835) 0:05:15.000 ***********
2025-06-01 22:34:42.243768 | orchestrator | ok: [testbed-manager]
2025-06-01 22:34:42.244122 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:34:42.245400 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:34:42.246280 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:34:42.246707 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:34:42.247355 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:34:42.248612 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:34:42.248767 | orchestrator |
2025-06-01 22:34:42.249644 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-06-01 22:34:42.250235 | orchestrator | Sunday 01 June 2025 22:34:42 +0000 (0:00:05.952) 0:05:20.953 ***********
2025-06-01 22:34:43.273886 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:34:43.274520 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:34:43.274561 | orchestrator | ok: [testbed-manager]
2025-06-01 22:34:43.275223 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:34:43.275720 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:34:43.276422 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:34:43.276679 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:34:43.277405 | orchestrator |
2025-06-01 22:34:43.277760 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-06-01 22:34:43.278147 | orchestrator | Sunday 01 June 2025 22:34:43 +0000 (0:00:01.031) 0:05:21.984 ***********
2025-06-01 22:34:51.177078 | orchestrator | ok: [testbed-manager]
2025-06-01 22:34:51.177901 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:34:51.177933 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:34:51.177947 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:34:51.178668 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:34:51.179134 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:34:51.180286 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:34:51.181682 | orchestrator |
2025-06-01 22:34:51.183190 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-06-01 22:34:51.185705 | orchestrator | Sunday 01 June 2025 22:34:51 +0000 (0:00:07.899) 0:05:29.883 ***********
2025-06-01 22:34:54.439025 | orchestrator | changed: [testbed-manager]
2025-06-01 22:34:54.439564 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:34:54.441140 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:34:54.444951 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:34:54.445756 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:34:54.447588 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:34:54.448590 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:34:54.449693 | orchestrator |
2025-06-01 22:34:54.450090 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-06-01 22:34:54.451335 | orchestrator | Sunday 01 June 2025 22:34:54 +0000 (0:00:03.264) 0:05:33.147 ***********
2025-06-01 22:34:56.041844 | orchestrator | ok: [testbed-manager]
2025-06-01 22:34:56.041945 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:34:56.042213 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:34:56.043276 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:34:56.043838 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:34:56.044869 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:34:56.045985 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:34:56.046637 | orchestrator |
2025-06-01 22:34:56.047675 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-06-01 22:34:56.047997 | orchestrator | Sunday 01 June 2025 22:34:56 +0000 (0:00:01.602) 0:05:34.750 ***********
2025-06-01 22:34:57.443465 | orchestrator | ok: [testbed-manager]
2025-06-01 22:34:57.444277 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:34:57.445312 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:34:57.447525 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:34:57.448507 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:34:57.449395 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:34:57.450127 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:34:57.451198 | orchestrator |
2025-06-01 22:34:57.451949 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-06-01 22:34:57.453200 | orchestrator | Sunday 01 June 2025 22:34:57 +0000 (0:00:01.403) 0:05:36.153 ***********
2025-06-01 22:34:57.654427 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:34:57.731291 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:34:57.806265 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:34:57.873501 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:34:58.124252 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:34:58.125564 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:34:58.126654 | orchestrator | changed: [testbed-manager]
2025-06-01 22:34:58.127384 | orchestrator |
2025-06-01 22:34:58.128106 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-06-01 22:34:58.128791 | orchestrator | Sunday 01 June 2025 22:34:58 +0000 (0:00:00.683) 0:05:36.837 ***********
2025-06-01 22:35:07.660425 | orchestrator | ok: [testbed-manager]
2025-06-01 22:35:07.661135 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:35:07.663131 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:35:07.664358 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:35:07.666496 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:35:07.668145 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:35:07.669029 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:35:07.673176 | orchestrator |
2025-06-01 22:35:07.673204 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-06-01 22:35:07.673219 | orchestrator | Sunday 01 June 2025 22:35:07 +0000 (0:00:09.533) 0:05:46.370 ***********
2025-06-01 22:35:08.659963 | orchestrator | changed: [testbed-manager]
2025-06-01 22:35:08.660061 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:35:08.660809 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:35:08.661727 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:35:08.662708 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:35:08.664939 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:35:08.666103 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:35:08.666542 | orchestrator |
2025-06-01 22:35:08.667853 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-06-01 22:35:08.669098 | orchestrator | Sunday 01 June 2025 22:35:08 +0000 (0:00:00.999) 0:05:47.370 ***********
2025-06-01 22:35:17.435162 | orchestrator | ok: [testbed-manager]
2025-06-01 22:35:17.435695 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:35:17.436622 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:35:17.439275 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:35:17.439515 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:35:17.439898 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:35:17.440564 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:35:17.441050 | orchestrator |
2025-06-01 22:35:17.441829 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-06-01 22:35:17.442515 | orchestrator | Sunday 01 June 2025 22:35:17 +0000 (0:00:08.777) 0:05:56.147 ***********
2025-06-01 22:35:28.101125 | orchestrator | ok: [testbed-manager]
2025-06-01 22:35:28.101236 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:35:28.101254 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:35:28.101331 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:35:28.102929 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:35:28.103666 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:35:28.103990 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:35:28.104792 | orchestrator |
2025-06-01 22:35:28.105113 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-06-01 22:35:28.105571 | orchestrator | Sunday 01 June 2025 22:35:28 +0000 (0:00:10.658) 0:06:06.806 ***********
2025-06-01 22:35:28.519954 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-06-01 22:35:29.323510 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-06-01 22:35:29.325462 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-06-01 22:35:29.326175 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-06-01 22:35:29.327726 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-06-01 22:35:29.328270 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-06-01 22:35:29.329789 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-06-01 22:35:29.330559 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-06-01 22:35:29.330850 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-06-01 22:35:29.331670 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-06-01 22:35:29.333704 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-06-01 22:35:29.333727 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-06-01 22:35:29.334512 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-06-01 22:35:29.335438 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-06-01 22:35:29.336794 | orchestrator |
2025-06-01 22:35:29.337683 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-06-01 22:35:29.338949 | orchestrator | Sunday 01 June 2025 22:35:29 +0000 (0:00:01.228) 0:06:08.034 ***********
2025-06-01 22:35:29.534768 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:35:29.607200 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:35:29.674584 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:35:29.741772 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:35:29.867313 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:35:29.867919 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:35:29.868617 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:35:29.869325 | orchestrator |
2025-06-01 22:35:29.870072 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-06-01 22:35:29.871299 | orchestrator | Sunday 01 June 2025 22:35:29 +0000 (0:00:00.546) 0:06:08.581 ***********
2025-06-01 22:35:33.678106 | orchestrator | ok: [testbed-manager]
2025-06-01 22:35:33.678332 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:35:33.678357 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:35:33.678874 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:35:33.679271 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:35:33.680006 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:35:33.680330 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:35:33.680788 | orchestrator |
2025-06-01 22:35:33.681238 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-06-01 22:35:33.681530 | orchestrator | Sunday 01 June 2025 22:35:33 +0000 (0:00:03.807) 0:06:12.388 ***********
2025-06-01 22:35:33.831987 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:35:33.901963 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:35:33.970787 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:35:34.044750 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:35:34.110553 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:35:34.222731 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:35:34.222882 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:35:34.223589 | orchestrator |
2025-06-01 22:35:34.224082 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-06-01 22:35:34.225093 | orchestrator | Sunday 01 June 2025 22:35:34 +0000 (0:00:00.545) 0:06:12.934 ***********
2025-06-01 22:35:34.301615 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-06-01 22:35:34.301774 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-06-01 22:35:34.382675 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:35:34.383465 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-06-01 22:35:34.383508 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-06-01 22:35:34.467288 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:35:34.467731 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-06-01 22:35:34.468498 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-06-01 22:35:34.545586 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:35:34.546482 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-06-01 22:35:34.547449 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-06-01 22:35:34.617091 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:35:34.617269 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-06-01 22:35:34.617688 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-06-01 22:35:34.688625 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:35:34.689183 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-06-01 22:35:34.690573 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-06-01 22:35:34.809008 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:35:34.809121 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-06-01 22:35:34.809318 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-06-01 22:35:34.809816 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:35:34.810247 | orchestrator |
2025-06-01 22:35:34.811211 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-06-01 22:35:34.813800 | orchestrator | Sunday 01 June 2025 22:35:34 +0000 (0:00:00.587) 0:06:13.521 ***********
2025-06-01 22:35:34.944403 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:35:35.020571 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:35:35.086353 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:35:35.151608 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:35:35.223745 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:35:35.326100 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:35:35.326220 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:35:35.327043 | orchestrator |
2025-06-01 22:35:35.327686 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-06-01 22:35:35.328384 | orchestrator | Sunday 01 June 2025 22:35:35 +0000 (0:00:00.516) 0:06:14.038 ***********
2025-06-01 22:35:35.462873 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:35:35.527550 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:35:35.594605 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:35:35.666517 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:35:35.733778 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:35:35.839024 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:35:35.839117 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:35:35.839913 | orchestrator |
2025-06-01 22:35:35.840859 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-06-01 22:35:35.844772 | orchestrator | Sunday 01 June 2025 22:35:35 +0000 (0:00:00.747) 0:06:14.550 ***********
2025-06-01 22:35:35.978337 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:35:36.047367 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:35:36.306551 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:35:36.371990 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:35:36.437636 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:35:36.586747 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:35:36.587613 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:35:36.588656 | orchestrator |
2025-06-01 22:35:36.589481 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-06-01 22:35:36.593351 | orchestrator | Sunday 01 June 2025 22:35:36 +0000 (0:00:00.747) 0:06:15.298 ***********
2025-06-01 22:35:38.195876 | orchestrator | ok: [testbed-manager]
2025-06-01 22:35:38.196376 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:35:38.196508 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:35:38.197991 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:35:38.198967 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:35:38.200726 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:35:38.201571 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:35:38.202690 | orchestrator | 2025-06-01 22:35:38.203222 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-06-01 22:35:38.204175 | orchestrator | Sunday 01 June 2025 22:35:38 +0000 (0:00:01.604) 0:06:16.903 *********** 2025-06-01 22:35:39.099454 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:35:39.099750 | orchestrator | 2025-06-01 22:35:39.100564 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-06-01 22:35:39.101236 | orchestrator | Sunday 01 June 2025 22:35:39 +0000 (0:00:00.907) 0:06:17.811 *********** 2025-06-01 22:35:39.950548 | orchestrator | ok: [testbed-manager] 2025-06-01 22:35:39.952137 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:35:39.953172 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:35:39.954805 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:35:39.955575 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:35:39.957159 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:35:39.957773 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:35:39.959254 | orchestrator | 2025-06-01 22:35:39.959603 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-06-01 22:35:39.960095 | orchestrator | Sunday 01 June 2025 22:35:39 +0000 (0:00:00.850) 0:06:18.661 *********** 2025-06-01 22:35:40.360934 | orchestrator | ok: [testbed-manager] 2025-06-01 22:35:40.511689 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:35:41.020053 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:35:41.020449 | orchestrator | changed: 
[testbed-node-5] 2025-06-01 22:35:41.021251 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:35:41.022547 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:35:41.024092 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:35:41.025879 | orchestrator | 2025-06-01 22:35:41.026865 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-06-01 22:35:41.028095 | orchestrator | Sunday 01 June 2025 22:35:41 +0000 (0:00:01.070) 0:06:19.731 *********** 2025-06-01 22:35:42.346622 | orchestrator | ok: [testbed-manager] 2025-06-01 22:35:42.347052 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:35:42.350834 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:35:42.352180 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:35:42.352713 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:35:42.355026 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:35:42.355851 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:35:42.357065 | orchestrator | 2025-06-01 22:35:42.357895 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-06-01 22:35:42.358518 | orchestrator | Sunday 01 June 2025 22:35:42 +0000 (0:00:01.325) 0:06:21.057 *********** 2025-06-01 22:35:42.472048 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:35:43.717987 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:35:43.719177 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:35:43.721879 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:35:43.722953 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:35:43.723948 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:35:43.724940 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:35:43.725760 | orchestrator | 2025-06-01 22:35:43.726697 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-06-01 22:35:43.727640 | orchestrator | Sunday 01 June 2025 22:35:43 
+0000 (0:00:01.370) 0:06:22.427 *********** 2025-06-01 22:35:44.995509 | orchestrator | ok: [testbed-manager] 2025-06-01 22:35:44.995689 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:35:44.996433 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:35:44.999136 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:35:44.999951 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:35:45.001079 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:35:45.001998 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:35:45.003281 | orchestrator | 2025-06-01 22:35:45.004100 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-06-01 22:35:45.004741 | orchestrator | Sunday 01 June 2025 22:35:44 +0000 (0:00:01.276) 0:06:23.704 *********** 2025-06-01 22:35:46.546651 | orchestrator | changed: [testbed-manager] 2025-06-01 22:35:46.548462 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:35:46.551224 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:35:46.552933 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:35:46.553973 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:35:46.554757 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:35:46.556110 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:35:46.557173 | orchestrator | 2025-06-01 22:35:46.557930 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-06-01 22:35:46.558506 | orchestrator | Sunday 01 June 2025 22:35:46 +0000 (0:00:01.552) 0:06:25.257 *********** 2025-06-01 22:35:47.464540 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:35:47.464712 | orchestrator | 2025-06-01 22:35:47.465834 | orchestrator | TASK [osism.services.docker : Reload systemd 
daemon] *************************** 2025-06-01 22:35:47.466730 | orchestrator | Sunday 01 June 2025 22:35:47 +0000 (0:00:00.918) 0:06:26.175 *********** 2025-06-01 22:35:48.840182 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:35:48.840273 | orchestrator | ok: [testbed-manager] 2025-06-01 22:35:48.841542 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:35:48.843536 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:35:48.845800 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:35:48.846168 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:35:48.847228 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:35:48.847598 | orchestrator | 2025-06-01 22:35:48.848247 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-06-01 22:35:48.849321 | orchestrator | Sunday 01 June 2025 22:35:48 +0000 (0:00:01.375) 0:06:27.551 *********** 2025-06-01 22:35:50.061139 | orchestrator | ok: [testbed-manager] 2025-06-01 22:35:50.062257 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:35:50.062463 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:35:50.063327 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:35:50.065505 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:35:50.067143 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:35:50.068108 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:35:50.069579 | orchestrator | 2025-06-01 22:35:50.070874 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-06-01 22:35:50.072424 | orchestrator | Sunday 01 June 2025 22:35:50 +0000 (0:00:01.219) 0:06:28.771 *********** 2025-06-01 22:35:51.404026 | orchestrator | ok: [testbed-manager] 2025-06-01 22:35:51.404963 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:35:51.406159 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:35:51.407361 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:35:51.409590 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:35:51.410090 | 
orchestrator | ok: [testbed-node-1] 2025-06-01 22:35:51.411123 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:35:51.412437 | orchestrator | 2025-06-01 22:35:51.414547 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-06-01 22:35:51.417528 | orchestrator | Sunday 01 June 2025 22:35:51 +0000 (0:00:01.342) 0:06:30.114 *********** 2025-06-01 22:35:52.698423 | orchestrator | ok: [testbed-manager] 2025-06-01 22:35:52.699137 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:35:52.701126 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:35:52.701199 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:35:52.702277 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:35:52.703669 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:35:52.704575 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:35:52.705411 | orchestrator | 2025-06-01 22:35:52.706202 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-06-01 22:35:52.707000 | orchestrator | Sunday 01 June 2025 22:35:52 +0000 (0:00:01.294) 0:06:31.408 *********** 2025-06-01 22:35:53.899570 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:35:53.899795 | orchestrator | 2025-06-01 22:35:53.901183 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-01 22:35:53.902709 | orchestrator | Sunday 01 June 2025 22:35:53 +0000 (0:00:00.911) 0:06:32.319 *********** 2025-06-01 22:35:53.903910 | orchestrator | 2025-06-01 22:35:53.905310 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-01 22:35:53.906484 | orchestrator | Sunday 01 June 2025 22:35:53 +0000 (0:00:00.039) 0:06:32.359 *********** 2025-06-01 22:35:53.907923 | 
orchestrator | 2025-06-01 22:35:53.909038 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-01 22:35:53.910466 | orchestrator | Sunday 01 June 2025 22:35:53 +0000 (0:00:00.048) 0:06:32.407 *********** 2025-06-01 22:35:53.910696 | orchestrator | 2025-06-01 22:35:53.911814 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-01 22:35:53.912822 | orchestrator | Sunday 01 June 2025 22:35:53 +0000 (0:00:00.039) 0:06:32.447 *********** 2025-06-01 22:35:53.913431 | orchestrator | 2025-06-01 22:35:53.915045 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-01 22:35:53.915830 | orchestrator | Sunday 01 June 2025 22:35:53 +0000 (0:00:00.038) 0:06:32.485 *********** 2025-06-01 22:35:53.916595 | orchestrator | 2025-06-01 22:35:53.917621 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-01 22:35:53.918301 | orchestrator | Sunday 01 June 2025 22:35:53 +0000 (0:00:00.045) 0:06:32.531 *********** 2025-06-01 22:35:53.918967 | orchestrator | 2025-06-01 22:35:53.919825 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-01 22:35:53.920751 | orchestrator | Sunday 01 June 2025 22:35:53 +0000 (0:00:00.038) 0:06:32.569 *********** 2025-06-01 22:35:53.921080 | orchestrator | 2025-06-01 22:35:53.922085 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-01 22:35:53.922725 | orchestrator | Sunday 01 June 2025 22:35:53 +0000 (0:00:00.038) 0:06:32.608 *********** 2025-06-01 22:35:55.262436 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:35:55.263262 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:35:55.264509 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:35:55.266528 | orchestrator | 2025-06-01 22:35:55.268198 | orchestrator | RUNNING HANDLER 
[osism.services.rsyslog : Restart rsyslog service] ************* 2025-06-01 22:35:55.268633 | orchestrator | Sunday 01 June 2025 22:35:55 +0000 (0:00:01.362) 0:06:33.971 *********** 2025-06-01 22:35:56.758318 | orchestrator | changed: [testbed-manager] 2025-06-01 22:35:56.759136 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:35:56.760510 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:35:56.762267 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:35:56.763148 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:35:56.764503 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:35:56.765130 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:35:56.765771 | orchestrator | 2025-06-01 22:35:56.766616 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-06-01 22:35:56.767309 | orchestrator | Sunday 01 June 2025 22:35:56 +0000 (0:00:01.496) 0:06:35.467 *********** 2025-06-01 22:35:57.892125 | orchestrator | changed: [testbed-manager] 2025-06-01 22:35:57.893959 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:35:57.895825 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:35:57.896925 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:35:57.898198 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:35:57.899225 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:35:57.900996 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:35:57.901891 | orchestrator | 2025-06-01 22:35:57.903494 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-06-01 22:35:57.904745 | orchestrator | Sunday 01 June 2025 22:35:57 +0000 (0:00:01.135) 0:06:36.602 *********** 2025-06-01 22:35:58.026099 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:36:00.055344 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:36:00.055869 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:36:00.057587 | orchestrator | 
changed: [testbed-node-0] 2025-06-01 22:36:00.058539 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:36:00.058951 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:36:00.059986 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:36:00.060696 | orchestrator | 2025-06-01 22:36:00.061112 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-06-01 22:36:00.062254 | orchestrator | Sunday 01 June 2025 22:36:00 +0000 (0:00:02.161) 0:06:38.763 *********** 2025-06-01 22:36:00.151530 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:36:00.151720 | orchestrator | 2025-06-01 22:36:00.152599 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-06-01 22:36:00.153862 | orchestrator | Sunday 01 June 2025 22:36:00 +0000 (0:00:00.101) 0:06:38.864 *********** 2025-06-01 22:36:01.155331 | orchestrator | ok: [testbed-manager] 2025-06-01 22:36:01.157450 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:36:01.157763 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:36:01.158756 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:36:01.158779 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:36:01.159303 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:36:01.160232 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:36:01.162645 | orchestrator | 2025-06-01 22:36:01.162831 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-06-01 22:36:01.163093 | orchestrator | Sunday 01 June 2025 22:36:01 +0000 (0:00:01.000) 0:06:39.865 *********** 2025-06-01 22:36:01.489847 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:36:01.562297 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:36:01.627222 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:36:01.703323 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:36:01.770350 | orchestrator | skipping: 
[testbed-node-0] 2025-06-01 22:36:01.898578 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:36:01.899910 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:36:01.900653 | orchestrator | 2025-06-01 22:36:01.901462 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-06-01 22:36:01.902148 | orchestrator | Sunday 01 June 2025 22:36:01 +0000 (0:00:00.745) 0:06:40.610 *********** 2025-06-01 22:36:02.829926 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:36:02.830994 | orchestrator | 2025-06-01 22:36:02.832537 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-06-01 22:36:02.833694 | orchestrator | Sunday 01 June 2025 22:36:02 +0000 (0:00:00.929) 0:06:41.540 *********** 2025-06-01 22:36:03.671262 | orchestrator | ok: [testbed-manager] 2025-06-01 22:36:03.672316 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:36:03.675631 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:36:03.675724 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:36:03.676486 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:36:03.677342 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:36:03.678152 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:36:03.679075 | orchestrator | 2025-06-01 22:36:03.680734 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-06-01 22:36:03.680755 | orchestrator | Sunday 01 June 2025 22:36:03 +0000 (0:00:00.841) 0:06:42.382 *********** 2025-06-01 22:36:06.247262 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-06-01 22:36:06.247550 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-06-01 22:36:06.248189 | orchestrator | changed: [testbed-node-4] 
=> (item=docker_containers) 2025-06-01 22:36:06.252214 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-06-01 22:36:06.252616 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-06-01 22:36:06.253548 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-06-01 22:36:06.253947 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-06-01 22:36:06.254751 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-06-01 22:36:06.255175 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-06-01 22:36:06.255972 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-06-01 22:36:06.256185 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-06-01 22:36:06.257280 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-06-01 22:36:06.257928 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-06-01 22:36:06.258381 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-06-01 22:36:06.260318 | orchestrator | 2025-06-01 22:36:06.260807 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-06-01 22:36:06.261273 | orchestrator | Sunday 01 June 2025 22:36:06 +0000 (0:00:02.574) 0:06:44.957 *********** 2025-06-01 22:36:06.404580 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:36:06.472459 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:36:06.546468 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:36:06.624303 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:36:06.693204 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:36:06.809572 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:36:06.810933 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:36:06.811845 | orchestrator | 2025-06-01 22:36:06.813131 | orchestrator | TASK [osism.commons.docker_compose : Include 
distribution specific install tasks] *** 2025-06-01 22:36:06.814014 | orchestrator | Sunday 01 June 2025 22:36:06 +0000 (0:00:00.565) 0:06:45.523 *********** 2025-06-01 22:36:07.664066 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:36:07.664180 | orchestrator | 2025-06-01 22:36:07.665555 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-06-01 22:36:07.666839 | orchestrator | Sunday 01 June 2025 22:36:07 +0000 (0:00:00.850) 0:06:46.373 *********** 2025-06-01 22:36:08.261252 | orchestrator | ok: [testbed-manager] 2025-06-01 22:36:08.332315 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:36:08.766989 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:36:08.767209 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:36:08.768795 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:36:08.770002 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:36:08.771098 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:36:08.772223 | orchestrator | 2025-06-01 22:36:08.773222 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-06-01 22:36:08.774162 | orchestrator | Sunday 01 June 2025 22:36:08 +0000 (0:00:01.104) 0:06:47.478 *********** 2025-06-01 22:36:09.200923 | orchestrator | ok: [testbed-manager] 2025-06-01 22:36:09.585083 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:36:09.585662 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:36:09.586864 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:36:09.587859 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:36:09.588910 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:36:09.590146 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:36:09.591560 | orchestrator | 2025-06-01 
22:36:09.591816 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-06-01 22:36:09.592500 | orchestrator | Sunday 01 June 2025 22:36:09 +0000 (0:00:00.816) 0:06:48.294 *********** 2025-06-01 22:36:09.728978 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:36:09.801042 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:36:09.867028 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:36:09.939919 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:36:10.020562 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:36:10.118784 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:36:10.119802 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:36:10.120567 | orchestrator | 2025-06-01 22:36:10.121683 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-06-01 22:36:10.125570 | orchestrator | Sunday 01 June 2025 22:36:10 +0000 (0:00:00.535) 0:06:48.830 *********** 2025-06-01 22:36:11.510887 | orchestrator | ok: [testbed-manager] 2025-06-01 22:36:11.515583 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:36:11.515627 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:36:11.515762 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:36:11.515780 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:36:11.515791 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:36:11.516191 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:36:11.516578 | orchestrator | 2025-06-01 22:36:11.516916 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-06-01 22:36:11.517100 | orchestrator | Sunday 01 June 2025 22:36:11 +0000 (0:00:01.391) 0:06:50.222 *********** 2025-06-01 22:36:11.645334 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:36:11.716735 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:36:11.793843 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:36:11.859398 | 
orchestrator | skipping: [testbed-node-5] 2025-06-01 22:36:11.931961 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:36:12.035900 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:36:12.036149 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:36:12.037516 | orchestrator | 2025-06-01 22:36:12.037929 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-06-01 22:36:12.038503 | orchestrator | Sunday 01 June 2025 22:36:12 +0000 (0:00:00.525) 0:06:50.747 *********** 2025-06-01 22:36:19.919140 | orchestrator | ok: [testbed-manager] 2025-06-01 22:36:19.919910 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:36:19.920885 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:36:19.921973 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:36:19.923625 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:36:19.923667 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:36:19.924235 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:36:19.924924 | orchestrator | 2025-06-01 22:36:19.926938 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-06-01 22:36:19.927463 | orchestrator | Sunday 01 June 2025 22:36:19 +0000 (0:00:07.879) 0:06:58.627 *********** 2025-06-01 22:36:21.230395 | orchestrator | ok: [testbed-manager] 2025-06-01 22:36:21.233945 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:36:21.233979 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:36:21.236709 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:36:21.236734 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:36:21.237037 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:36:21.238747 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:36:21.239614 | orchestrator | 2025-06-01 22:36:21.240665 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-06-01 22:36:21.241505 | 
orchestrator | Sunday 01 June 2025 22:36:21 +0000 (0:00:01.312) 0:06:59.940 ***********
2025-06-01 22:36:22.924850 | orchestrator | ok: [testbed-manager]
2025-06-01 22:36:22.925431 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:36:22.931271 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:36:22.932760 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:36:22.934731 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:36:22.938137 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:36:22.939005 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:36:22.939635 | orchestrator |
2025-06-01 22:36:22.940257 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-06-01 22:36:22.941033 | orchestrator | Sunday 01 June 2025 22:36:22 +0000 (0:00:01.694) 0:07:01.634 ***********
2025-06-01 22:36:24.601733 | orchestrator | ok: [testbed-manager]
2025-06-01 22:36:24.602483 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:36:24.604071 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:36:24.606514 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:36:24.608073 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:36:24.608443 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:36:24.609609 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:36:24.610281 | orchestrator |
2025-06-01 22:36:24.612163 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-01 22:36:24.612845 | orchestrator | Sunday 01 June 2025 22:36:24 +0000 (0:00:01.676) 0:07:03.311 ***********
2025-06-01 22:36:25.026105 | orchestrator | ok: [testbed-manager]
2025-06-01 22:36:25.666584 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:36:25.666792 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:36:25.669100 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:36:25.670494 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:36:25.671565 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:36:25.672453 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:36:25.673507 | orchestrator |
2025-06-01 22:36:25.674506 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-01 22:36:25.675375 | orchestrator | Sunday 01 June 2025 22:36:25 +0000 (0:00:01.065) 0:07:04.377 ***********
2025-06-01 22:36:25.798578 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:36:25.870407 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:36:25.937217 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:36:26.016433 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:36:26.089151 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:36:26.493688 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:36:26.495023 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:36:26.496298 | orchestrator |
2025-06-01 22:36:26.497114 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-06-01 22:36:26.498383 | orchestrator | Sunday 01 June 2025 22:36:26 +0000 (0:00:00.826) 0:07:05.204 ***********
2025-06-01 22:36:26.633976 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:36:26.699203 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:36:26.775594 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:36:26.838100 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:36:26.904543 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:37:27.007395 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:36:27.008714 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:36:27.009669 | orchestrator |
2025-06-01 22:36:27.011307 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-06-01 22:36:27.012759 | orchestrator | Sunday 01 June 2025 22:36:27 +0000 (0:00:00.515) 0:07:05.720 ***********
2025-06-01 22:36:27.138787 | orchestrator | ok: [testbed-manager]
2025-06-01 22:36:27.208609 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:36:27.272446 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:36:27.342765 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:36:27.601978 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:36:27.750796 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:36:27.753175 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:36:27.754627 | orchestrator |
2025-06-01 22:36:27.755503 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-06-01 22:36:27.761126 | orchestrator | Sunday 01 June 2025 22:36:27 +0000 (0:00:00.737) 0:07:06.457 ***********
2025-06-01 22:36:27.899497 | orchestrator | ok: [testbed-manager]
2025-06-01 22:36:27.964660 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:36:28.031810 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:36:28.104522 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:36:28.169884 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:36:28.276477 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:36:28.276763 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:36:28.278497 | orchestrator |
2025-06-01 22:36:28.279531 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-06-01 22:36:28.280231 | orchestrator | Sunday 01 June 2025 22:36:28 +0000 (0:00:00.529) 0:07:06.987 ***********
2025-06-01 22:36:28.413914 | orchestrator | ok: [testbed-manager]
2025-06-01 22:36:28.479573 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:36:28.551617 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:36:28.618161 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:36:28.683795 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:36:28.799089 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:36:28.799830 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:36:28.801621 | orchestrator |
2025-06-01 22:36:28.803107 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-06-01 22:36:28.803592 | orchestrator | Sunday 01 June 2025 22:36:28 +0000 (0:00:00.524) 0:07:07.511 ***********
2025-06-01 22:36:34.375392 | orchestrator | ok: [testbed-manager]
2025-06-01 22:36:34.376693 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:36:34.379150 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:36:34.380840 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:36:34.383067 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:36:34.384468 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:36:34.385208 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:36:34.386084 | orchestrator |
2025-06-01 22:36:34.387637 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-06-01 22:36:34.388813 | orchestrator | Sunday 01 June 2025 22:36:34 +0000 (0:00:05.574) 0:07:13.086 ***********
2025-06-01 22:36:34.581793 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:36:34.649552 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:36:34.725689 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:36:34.788377 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:36:34.901381 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:36:34.902692 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:36:34.904444 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:36:34.905858 | orchestrator |
2025-06-01 22:36:34.907122 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-06-01 22:36:34.908052 | orchestrator | Sunday 01 June 2025 22:36:34 +0000 (0:00:00.526) 0:07:13.612 ***********
2025-06-01 22:36:35.989201 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:36:35.989429 | orchestrator |
2025-06-01 22:36:35.992779 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-06-01 22:36:35.992804 | orchestrator | Sunday 01 June 2025 22:36:35 +0000 (0:00:01.087) 0:07:14.700 ***********
2025-06-01 22:36:37.717921 | orchestrator | ok: [testbed-manager]
2025-06-01 22:36:37.718060 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:36:37.718089 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:36:37.718842 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:36:37.719962 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:36:37.722614 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:36:37.723055 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:36:37.724553 | orchestrator |
2025-06-01 22:36:37.725134 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-06-01 22:36:37.725580 | orchestrator | Sunday 01 June 2025 22:36:37 +0000 (0:00:01.725) 0:07:16.425 ***********
2025-06-01 22:36:38.875631 | orchestrator | ok: [testbed-manager]
2025-06-01 22:36:38.877046 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:36:38.878331 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:36:38.879161 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:36:38.880198 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:36:38.881038 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:36:38.881947 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:36:38.882733 | orchestrator |
2025-06-01 22:36:38.883692 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-06-01 22:36:38.884505 | orchestrator | Sunday 01 June 2025 22:36:38 +0000 (0:00:01.161) 0:07:17.587 ***********
2025-06-01 22:36:39.523081 | orchestrator | ok: [testbed-manager]
2025-06-01 22:36:39.955894 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:36:39.957126 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:36:39.959282 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:36:39.959669 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:36:39.961495 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:36:39.962666 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:36:39.963773 | orchestrator |
2025-06-01 22:36:39.964716 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-06-01 22:36:39.965650 | orchestrator | Sunday 01 June 2025 22:36:39 +0000 (0:00:01.078) 0:07:18.665 ***********
2025-06-01 22:36:41.624492 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-01 22:36:41.625008 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-01 22:36:41.629146 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-01 22:36:41.629179 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-01 22:36:41.629191 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-01 22:36:41.629286 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-01 22:36:41.630459 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-01 22:36:41.631824 | orchestrator |
2025-06-01 22:36:41.632591 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-06-01 22:36:41.633657 | orchestrator | Sunday 01 June 2025 22:36:41 +0000 (0:00:01.668) 0:07:20.334 ***********
2025-06-01 22:36:42.442538 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:36:42.443387 | orchestrator |
2025-06-01 22:36:42.447216 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-06-01 22:36:42.447288 | orchestrator | Sunday 01 June 2025 22:36:42 +0000 (0:00:00.818) 0:07:21.152 ***********
2025-06-01 22:36:51.578448 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:36:51.578906 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:36:51.579977 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:36:51.580874 | orchestrator | changed: [testbed-manager]
2025-06-01 22:36:51.581514 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:36:51.582342 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:36:51.582771 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:36:51.584471 | orchestrator |
2025-06-01 22:36:51.584951 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-06-01 22:36:51.586115 | orchestrator | Sunday 01 June 2025 22:36:51 +0000 (0:00:09.136) 0:07:30.289 ***********
2025-06-01 22:36:53.393907 | orchestrator | ok: [testbed-manager]
2025-06-01 22:36:53.400051 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:36:53.402272 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:36:53.404142 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:36:53.405723 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:36:53.407231 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:36:53.408877 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:36:53.409646 | orchestrator |
2025-06-01 22:36:53.410622 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-06-01 22:36:53.411574 | orchestrator | Sunday 01 June 2025 22:36:53 +0000 (0:00:01.811) 0:07:32.101 ***********
2025-06-01 22:36:54.621469 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:36:54.623178 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:36:54.624524 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:36:54.625676 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:36:54.626629 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:36:54.627683 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:36:54.628679 | orchestrator |
2025-06-01 22:36:54.628929 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-06-01 22:36:54.630109 | orchestrator | Sunday 01 June 2025 22:36:54 +0000 (0:00:01.229) 0:07:33.331 ***********
2025-06-01 22:36:56.061253 | orchestrator | changed: [testbed-manager]
2025-06-01 22:36:56.063036 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:36:56.065496 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:36:56.067320 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:36:56.068223 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:36:56.069570 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:36:56.070211 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:36:56.071505 | orchestrator |
2025-06-01 22:36:56.072912 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-06-01 22:36:56.073750 | orchestrator |
2025-06-01 22:36:56.075623 | orchestrator | TASK [Include hardening role] **************************************************
2025-06-01 22:36:56.076624 | orchestrator | Sunday 01 June 2025 22:36:56 +0000 (0:00:01.441) 0:07:34.772 ***********
2025-06-01 22:36:56.202956 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:36:56.266549 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:36:56.335442 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:36:56.404837 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:36:56.472378 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:36:56.588498 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:36:56.589862 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:36:56.590241 | orchestrator |
2025-06-01 22:36:56.594232 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-06-01 22:36:56.595693 | orchestrator |
2025-06-01 22:36:56.595731 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-06-01 22:36:56.597533 | orchestrator | Sunday 01 June 2025 22:36:56 +0000 (0:00:00.529) 0:07:35.302 ***********
2025-06-01 22:36:57.915474 | orchestrator | changed: [testbed-manager]
2025-06-01 22:36:57.918409 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:36:57.919985 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:36:57.920633 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:36:57.921596 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:36:57.922597 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:36:57.923034 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:36:57.923865 | orchestrator |
2025-06-01 22:36:57.924221 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-06-01 22:36:57.924675 | orchestrator | Sunday 01 June 2025 22:36:57 +0000 (0:00:01.322) 0:07:36.625 ***********
2025-06-01 22:36:59.335801 | orchestrator | ok: [testbed-manager]
2025-06-01 22:36:59.336262 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:36:59.337252 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:36:59.339752 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:36:59.340685 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:36:59.342409 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:36:59.344594 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:36:59.345690 | orchestrator |
2025-06-01 22:36:59.346912 | orchestrator | TASK [Include auditd role] *****************************************************
2025-06-01 22:36:59.347617 | orchestrator | Sunday 01 June 2025 22:36:59 +0000 (0:00:01.420) 0:07:38.045 ***********
2025-06-01 22:36:59.666964 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:36:59.730592 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:36:59.809580 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:36:59.884768 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:36:59.955646 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:37:00.388949 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:37:00.392461 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:37:00.393207 | orchestrator |
2025-06-01 22:37:00.394536 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-06-01 22:37:00.395589 | orchestrator | Sunday 01 June 2025 22:37:00 +0000 (0:00:01.056) 0:07:39.102 ***********
2025-06-01 22:37:01.588015 | orchestrator | changed: [testbed-manager]
2025-06-01 22:37:01.588115 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:37:01.588230 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:37:01.590093 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:37:01.590117 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:37:01.592293 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:37:01.593406 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:37:01.593777 | orchestrator |
2025-06-01 22:37:01.594793 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-06-01 22:37:01.595404 | orchestrator |
2025-06-01 22:37:01.595807 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-06-01 22:37:01.596216 | orchestrator | Sunday 01 June 2025 22:37:01 +0000 (0:00:01.196) 0:07:40.299 ***********
2025-06-01 22:37:02.607908 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:37:02.608563 | orchestrator |
2025-06-01 22:37:02.609651 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-06-01 22:37:02.612763 | orchestrator | Sunday 01 June 2025 22:37:02 +0000 (0:00:01.020) 0:07:41.319 ***********
2025-06-01 22:37:03.037560 | orchestrator | ok: [testbed-manager]
2025-06-01 22:37:03.434713 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:37:03.435594 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:37:03.436489 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:37:03.437447 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:37:03.438287 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:37:03.440461 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:37:03.440485 | orchestrator |
2025-06-01 22:37:03.440758 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-06-01 22:37:03.441312 | orchestrator | Sunday 01 June 2025 22:37:03 +0000 (0:00:00.826) 0:07:42.146 ***********
2025-06-01 22:37:04.587605 | orchestrator | changed: [testbed-manager]
2025-06-01 22:37:04.588819 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:37:04.589862 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:37:04.590906 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:37:04.592124 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:37:04.592728 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:37:04.593216 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:37:04.593905 | orchestrator |
2025-06-01 22:37:04.594967 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-06-01 22:37:04.595700 | orchestrator | Sunday 01 June 2025 22:37:04 +0000 (0:00:01.147) 0:07:43.294 ***********
2025-06-01 22:37:05.610355 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:37:05.610451 | orchestrator |
2025-06-01 22:37:05.611925 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-06-01 22:37:05.615547 | orchestrator | Sunday 01 June 2025 22:37:05 +0000 (0:00:01.027) 0:07:44.321 ***********
2025-06-01 22:37:06.439090 | orchestrator | ok: [testbed-manager]
2025-06-01 22:37:06.439513 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:37:06.440740 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:37:06.441867 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:37:06.442807 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:37:06.443958 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:37:06.444347 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:37:06.445207 | orchestrator |
2025-06-01 22:37:06.445747 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-06-01 22:37:06.446775 | orchestrator | Sunday 01 June 2025 22:37:06 +0000 (0:00:00.826) 0:07:45.148 ***********
2025-06-01 22:37:06.853541 | orchestrator | changed: [testbed-manager]
2025-06-01 22:37:07.558366 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:37:07.559039 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:37:07.560215 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:37:07.561768 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:37:07.561904 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:37:07.562815 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:37:07.563627 | orchestrator |
2025-06-01 22:37:07.564169 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:37:07.565093 | orchestrator | 2025-06-01 22:37:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 22:37:07.565124 | orchestrator | 2025-06-01 22:37:07 | INFO  | Please wait and do not abort execution.
2025-06-01 22:37:07.566295 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-06-01 22:37:07.566330 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-01 22:37:07.566798 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-01 22:37:07.567925 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-01 22:37:07.568349 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-06-01 22:37:07.568564 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-01 22:37:07.568997 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-01 22:37:07.569443 | orchestrator |
2025-06-01 22:37:07.569834 | orchestrator |
2025-06-01 22:37:07.570460 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:37:07.570704 | orchestrator | Sunday 01 June 2025 22:37:07 +0000 (0:00:01.122) 0:07:46.270 ***********
2025-06-01 22:37:07.571115 | orchestrator | ===============================================================================
2025-06-01 22:37:07.571653 | orchestrator | osism.commons.packages : Install required packages --------------------- 74.39s
2025-06-01 22:37:07.572210 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.49s
2025-06-01 22:37:07.572554 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.46s
2025-06-01 22:37:07.572932 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.40s
2025-06-01 22:37:07.573791 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.13s
2025-06-01 22:37:07.573939 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.89s
2025-06-01 22:37:07.574422 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.66s
2025-06-01 22:37:07.574821 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.53s
2025-06-01 22:37:07.575004 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.14s
2025-06-01 22:37:07.576769 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.78s
2025-06-01 22:37:07.576792 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.26s
2025-06-01 22:37:07.578124 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.90s
2025-06-01 22:37:07.578149 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.88s
2025-06-01 22:37:07.578216 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.63s
2025-06-01 22:37:07.579119 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.49s
2025-06-01 22:37:07.579403 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.04s
2025-06-01 22:37:07.580368 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.95s
2025-06-01 22:37:07.580461 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.62s
2025-06-01 22:37:07.581138 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.57s
2025-06-01 22:37:07.581367 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.53s
2025-06-01 22:37:08.414788 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-06-01 22:37:08.414844 | orchestrator | + osism apply network
2025-06-01 22:37:10.742443 | orchestrator | Registering Redlock._acquired_script
2025-06-01 22:37:10.742578 | orchestrator | Registering Redlock._extend_script
2025-06-01 22:37:10.742609 | orchestrator | Registering Redlock._release_script
2025-06-01 22:37:10.812202 | orchestrator | 2025-06-01 22:37:10 | INFO  | Task 9ae5beb0-887b-4b04-98a1-2d12e195f80d (network) was prepared for execution.
2025-06-01 22:37:10.812321 | orchestrator | 2025-06-01 22:37:10 | INFO  | It takes a moment until task 9ae5beb0-887b-4b04-98a1-2d12e195f80d (network) has been started and output is visible here.
2025-06-01 22:37:15.235812 | orchestrator |
2025-06-01 22:37:15.235987 | orchestrator | PLAY [Apply role network] ******************************************************
2025-06-01 22:37:15.239458 | orchestrator |
2025-06-01 22:37:15.239488 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-06-01 22:37:15.239501 | orchestrator | Sunday 01 June 2025 22:37:15 +0000 (0:00:00.298) 0:00:00.298 ***********
2025-06-01 22:37:15.389842 | orchestrator | ok: [testbed-manager]
2025-06-01 22:37:15.469214 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:37:15.559636 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:37:15.643942 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:37:15.840712 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:37:15.985840 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:37:15.986483 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:37:15.987448 | orchestrator |
2025-06-01 22:37:15.990591 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-06-01 22:37:15.990622 | orchestrator | Sunday 01 June 2025 22:37:15 +0000 (0:00:00.749) 0:00:01.048 ***********
2025-06-01 22:37:17.218794 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:37:17.218961 | orchestrator |
2025-06-01 22:37:17.219661 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-06-01 22:37:17.220124 | orchestrator | Sunday 01 June 2025 22:37:17 +0000 (0:00:01.230) 0:00:02.278 ***********
2025-06-01 22:37:19.281477 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:37:19.282677 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:37:19.284675 | orchestrator | ok: [testbed-manager]
2025-06-01 22:37:19.284728 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:37:19.286536 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:37:19.287310 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:37:19.288013 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:37:19.290593 | orchestrator |
2025-06-01 22:37:19.290646 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-06-01 22:37:19.290661 | orchestrator | Sunday 01 June 2025 22:37:19 +0000 (0:00:02.065) 0:00:04.344 ***********
2025-06-01 22:37:21.080005 | orchestrator | ok: [testbed-manager]
2025-06-01 22:37:21.080209 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:37:21.082490 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:37:21.085181 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:37:21.085226 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:37:21.086097 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:37:21.087204 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:37:21.088295 | orchestrator |
2025-06-01 22:37:21.089225 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-06-01 22:37:21.090075 | orchestrator | Sunday 01 June 2025 22:37:21 +0000 (0:00:01.797) 0:00:06.141 ***********
2025-06-01 22:37:21.686405 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-06-01 22:37:21.689722 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-06-01 22:37:21.690903 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-06-01 22:37:22.115396 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-06-01 22:37:22.115573 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-06-01 22:37:22.116742 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-06-01 22:37:22.117000 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-06-01 22:37:22.117702 | orchestrator |
2025-06-01 22:37:22.119054 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-06-01 22:37:22.123439 | orchestrator | Sunday 01 June 2025 22:37:22 +0000 (0:00:01.038) 0:00:07.180 ***********
2025-06-01 22:37:25.587696 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-01 22:37:25.590403 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-01 22:37:25.593613 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-01 22:37:25.595868 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-01 22:37:25.596631 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-01 22:37:25.597901 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-01 22:37:25.598943 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-01 22:37:25.600084 | orchestrator |
2025-06-01 22:37:25.601221 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-06-01 22:37:25.602580 | orchestrator | Sunday 01 June 2025 22:37:25 +0000 (0:00:03.466) 0:00:10.647 ***********
2025-06-01 22:37:27.172680 | orchestrator | changed: [testbed-manager]
2025-06-01 22:37:27.174536 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:37:27.175909 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:37:27.177433 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:37:27.178964 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:37:27.178989 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:37:27.180553 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:37:27.183322 | orchestrator |
2025-06-01 22:37:27.183354 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-06-01 22:37:27.183368 | orchestrator | Sunday 01 June 2025 22:37:27 +0000 (0:00:01.588) 0:00:12.236 ***********
2025-06-01 22:37:29.177209 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-01 22:37:29.177353 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-01 22:37:29.177365 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-01 22:37:29.178112 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-01 22:37:29.179962 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-01 22:37:29.181706 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-01 22:37:29.183090 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-01 22:37:29.185001 | orchestrator |
2025-06-01 22:37:29.186089 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-06-01 22:37:29.186993 | orchestrator | Sunday 01 June 2025 22:37:29 +0000 (0:00:02.001) 0:00:14.237 ***********
2025-06-01 22:37:29.606082 | orchestrator | ok: [testbed-manager]
2025-06-01 22:37:29.906447 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:37:30.319781 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:37:30.321257 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:37:30.322544 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:37:30.323368 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:37:30.325312 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:37:30.325994 | orchestrator |
2025-06-01 22:37:30.327066 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-06-01 22:37:30.327717 | orchestrator | Sunday 01 June 2025 22:37:30 +0000 (0:00:01.142) 0:00:15.379 ***********
2025-06-01 22:37:30.504960 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:37:30.596159 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:37:30.699865 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:37:30.779494 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:37:30.862320 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:37:31.008525 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:37:31.008600 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:37:31.009491 | orchestrator |
2025-06-01 22:37:31.010833 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-06-01 22:37:31.011847 | orchestrator | Sunday 01 June 2025 22:37:31 +0000 (0:00:00.693) 0:00:16.072 ***********
2025-06-01 22:37:33.233818 | orchestrator | ok: [testbed-manager]
2025-06-01 22:37:33.237336 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:37:33.237446 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:37:33.237462 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:37:33.239427 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:37:33.241244 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:37:33.242334 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:37:33.243064 | orchestrator |
2025-06-01 22:37:33.244009 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-06-01 22:37:33.244788 | orchestrator | Sunday 01 June 2025 22:37:33 +0000 (0:00:02.221) 0:00:18.294 ***********
2025-06-01 22:37:33.491986 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:37:33.576197 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:37:33.661169 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:37:33.757377 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:37:34.125633 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:37:34.125914 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:37:34.127311 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-06-01 22:37:34.130296 | orchestrator |
2025-06-01 22:37:34.130341 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-06-01 22:37:34.130355 | orchestrator | Sunday 01 June 2025 22:37:34 +0000 (0:00:00.895) 0:00:19.190 ***********
2025-06-01 22:37:35.816900 | orchestrator | ok: [testbed-manager]
2025-06-01 22:37:35.817664 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:37:35.817712 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:37:35.817893 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:37:35.821909 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:37:35.823043 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:37:35.823057 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:37:35.823350 | orchestrator |
2025-06-01 22:37:35.823990 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-06-01 22:37:35.824423 | orchestrator | Sunday 01 June 2025 22:37:35 +0000 (0:00:01.686) 0:00:20.877 ***********
2025-06-01 22:37:37.105146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:37:37.105571 | orchestrator |
2025-06-01 22:37:37.106294 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-06-01 22:37:37.109886 | orchestrator | Sunday 01 June 2025 22:37:37 +0000 (0:00:01.289) 0:00:22.166 ***********
2025-06-01 22:37:37.694768 | orchestrator | ok: [testbed-manager]
2025-06-01 22:37:38.282948 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:37:38.283543 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:37:38.283821 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:37:38.284386 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:37:38.284742 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:37:38.285378 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:37:38.286289 | orchestrator |
2025-06-01 22:37:38.289338 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-06-01 22:37:38.289384 | orchestrator | Sunday 01 June 2025 22:37:38 +0000 (0:00:00.676) 0:00:23.344 ***********
2025-06-01 22:37:38.454853 | orchestrator | ok: [testbed-manager]
2025-06-01 22:37:38.540646 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:37:38.629423 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:37:38.719566 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:37:38.803110 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:37:38.961973 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:37:38.963083 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:37:38.964100 | orchestrator |
2025-06-01 22:37:38.965423 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-06-01 22:37:38.966493 | orchestrator | Sunday 01 June 2025 22:37:38 +0000 (0:00:00.676) 0:00:24.021 ***********
2025-06-01 22:37:39.394389 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-01 22:37:39.394928 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-06-01 22:37:39.718924 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-01 22:37:39.719659 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-06-01 22:37:39.720491 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-01 22:37:39.721706 | orchestrator | skipping:
[testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-06-01 22:37:39.722364 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-01 22:37:39.723355 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-06-01 22:37:39.724463 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-01 22:37:39.724789 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-06-01 22:37:40.236101 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-01 22:37:40.237034 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-06-01 22:37:40.239558 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-01 22:37:40.239663 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-06-01 22:37:40.239676 | orchestrator | 2025-06-01 22:37:40.239686 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-06-01 22:37:40.239744 | orchestrator | Sunday 01 June 2025 22:37:40 +0000 (0:00:01.274) 0:00:25.296 *********** 2025-06-01 22:37:40.396749 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:37:40.482394 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:37:40.569300 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:37:40.653769 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:37:40.734274 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:37:40.847184 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:37:40.847896 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:37:40.848628 | orchestrator | 2025-06-01 22:37:40.850842 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-06-01 22:37:40.854374 | orchestrator | Sunday 01 June 2025 22:37:40 +0000 (0:00:00.616) 0:00:25.913 
*********** 2025-06-01 22:37:44.447952 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4 2025-06-01 22:37:44.448198 | orchestrator | 2025-06-01 22:37:44.448491 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-06-01 22:37:44.451246 | orchestrator | Sunday 01 June 2025 22:37:44 +0000 (0:00:03.595) 0:00:29.509 *********** 2025-06-01 22:37:49.433566 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:37:49.433825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:37:49.434379 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:37:49.435448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:37:49.439505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', 
'192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:37:49.439530 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:37:49.439542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:37:49.440355 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:37:49.441370 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:37:49.442084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:37:49.442634 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:37:49.443250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': 
['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:37:49.443733 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:37:49.444304 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:37:49.444750 | orchestrator | 2025-06-01 22:37:49.445498 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-06-01 22:37:49.445863 | orchestrator | Sunday 01 June 2025 22:37:49 +0000 (0:00:04.983) 0:00:34.492 *********** 2025-06-01 22:37:54.344893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:37:54.346776 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:37:54.350623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-01 
22:37:54.352027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:37:54.354421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:37:54.355875 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:37:54.356717 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:37:54.357999 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:37:54.359078 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:37:54.359600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:37:54.361139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:37:54.361414 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:37:54.362450 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:37:54.363637 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:37:54.364495 | orchestrator | 2025-06-01 22:37:54.365324 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-06-01 22:37:54.365673 | orchestrator | Sunday 01 June 2025 22:37:54 +0000 (0:00:04.914) 0:00:39.407 *********** 2025-06-01 22:37:55.608809 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:37:55.608999 | orchestrator | 2025-06-01 22:37:55.609445 | orchestrator | TASK 
[osism.commons.network : List existing configuration files] *************** 2025-06-01 22:37:55.609971 | orchestrator | Sunday 01 June 2025 22:37:55 +0000 (0:00:01.263) 0:00:40.670 *********** 2025-06-01 22:37:56.077067 | orchestrator | ok: [testbed-manager] 2025-06-01 22:37:56.354922 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:37:56.793825 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:37:56.795633 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:37:56.797203 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:37:56.797992 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:37:56.799413 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:37:56.800856 | orchestrator | 2025-06-01 22:37:56.801821 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-01 22:37:56.802631 | orchestrator | Sunday 01 June 2025 22:37:56 +0000 (0:00:01.186) 0:00:41.857 *********** 2025-06-01 22:37:56.930867 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-01 22:37:56.932305 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-01 22:37:56.933948 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-01 22:37:56.934771 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-01 22:37:57.030654 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:37:57.032404 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-01 22:37:57.034105 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-01 22:37:57.034712 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-01 22:37:57.035404 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-01 
22:37:57.130108 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-01 22:37:57.130473 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-01 22:37:57.130770 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-01 22:37:57.131406 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-01 22:37:57.245941 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:37:57.246287 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-01 22:37:57.247555 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-01 22:37:57.248796 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-01 22:37:57.250012 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-01 22:37:57.355366 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:37:57.356427 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-01 22:37:57.357827 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-01 22:37:57.359449 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-01 22:37:57.360374 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-01 22:37:57.656486 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:37:57.657312 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-01 22:37:57.658137 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-01 22:37:57.659747 | orchestrator | skipping: [testbed-node-4] => 
(item=/etc/systemd/network/30-vxlan1.network)  2025-06-01 22:37:57.661143 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-01 22:37:58.965938 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:37:58.967145 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:37:58.968596 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-01 22:37:58.969636 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-01 22:37:58.970704 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-01 22:37:58.971886 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-01 22:37:58.972895 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:37:58.973771 | orchestrator | 2025-06-01 22:37:58.974409 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-06-01 22:37:58.975459 | orchestrator | Sunday 01 June 2025 22:37:58 +0000 (0:00:02.168) 0:00:44.025 *********** 2025-06-01 22:37:59.133679 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:37:59.214819 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:37:59.296413 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:37:59.382110 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:37:59.469779 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:37:59.642148 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:37:59.642276 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:37:59.642393 | orchestrator | 2025-06-01 22:37:59.642785 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-06-01 22:37:59.643248 | orchestrator | Sunday 01 June 2025 22:37:59 +0000 (0:00:00.682) 0:00:44.707 *********** 2025-06-01 22:37:59.814410 | orchestrator | skipping: 
[testbed-manager] 2025-06-01 22:38:00.099137 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:38:00.194118 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:38:00.282778 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:38:00.376353 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:38:00.430265 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:38:00.430639 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:38:00.431125 | orchestrator | 2025-06-01 22:38:00.432063 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:38:00.432661 | orchestrator | 2025-06-01 22:38:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 22:38:00.433049 | orchestrator | 2025-06-01 22:38:00 | INFO  | Please wait and do not abort execution. 2025-06-01 22:38:00.434088 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 22:38:00.435046 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-01 22:38:00.435529 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-01 22:38:00.436390 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-01 22:38:00.437386 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-01 22:38:00.437770 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-01 22:38:00.438794 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-01 22:38:00.439314 | orchestrator | 2025-06-01 22:38:00.440101 | orchestrator | 2025-06-01 22:38:00.440276 | orchestrator | TASKS RECAP 
******************************************************************** 2025-06-01 22:38:00.440701 | orchestrator | Sunday 01 June 2025 22:38:00 +0000 (0:00:00.787) 0:00:45.495 ***********
2025-06-01 22:38:00.441255 | orchestrator | ===============================================================================
2025-06-01 22:38:00.441612 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.98s
2025-06-01 22:38:00.442315 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.91s
2025-06-01 22:38:00.442796 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.60s
2025-06-01 22:38:00.443238 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.47s
2025-06-01 22:38:00.443575 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.22s
2025-06-01 22:38:00.443959 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.17s
2025-06-01 22:38:00.444256 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.07s
2025-06-01 22:38:00.444574 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 2.00s
2025-06-01 22:38:00.444910 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.80s
2025-06-01 22:38:00.445488 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.69s
2025-06-01 22:38:00.445933 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.59s
2025-06-01 22:38:00.446330 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.29s
2025-06-01 22:38:00.446667 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.27s
2025-06-01 22:38:00.447037 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.26s
2025-06-01 22:38:00.447405 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.23s
2025-06-01 22:38:00.447582 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.19s
2025-06-01 22:38:00.447949 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.18s
2025-06-01 22:38:00.448339 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.14s
2025-06-01 22:38:00.448570 | orchestrator | osism.commons.network : Create required directories --------------------- 1.04s
2025-06-01 22:38:00.448764 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.90s
2025-06-01 22:38:01.111322 | orchestrator | + osism apply wireguard
2025-06-01 22:38:02.785999 | orchestrator | Registering Redlock._acquired_script
2025-06-01 22:38:02.786144 | orchestrator | Registering Redlock._extend_script
2025-06-01 22:38:02.786158 | orchestrator | Registering Redlock._release_script
2025-06-01 22:38:02.848908 | orchestrator | 2025-06-01 22:38:02 | INFO  | Task 7b28bd5a-f6cb-4896-9da5-d02796a3f5b0 (wireguard) was prepared for execution.
2025-06-01 22:38:02.849000 | orchestrator | 2025-06-01 22:38:02 | INFO  | It takes a moment until task 7b28bd5a-f6cb-4896-9da5-d02796a3f5b0 (wireguard) has been started and output is visible here.
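[Editor's note] The "Create systemd networkd netdev files" and "Create systemd networkd network files" tasks above render per-VXLAN unit files; the file names /etc/systemd/network/30-vxlan0.netdev and 30-vxlan0.network appear in the cleanup task's skip list. A minimal sketch of what such files could look like for vxlan0 on testbed-manager, using the item parameters logged above (vni 42, mtu 1350, local_ip 192.168.16.5, address 192.168.112.5/20). The exact template contents are an assumption, in particular the use of static all-zero [BridgeFDB] flood entries for each entry in `dests`; this is not necessarily the role's actual output:

```ini
# /etc/systemd/network/30-vxlan0.netdev (illustrative sketch, not the role's template)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5

# /etc/systemd/network/30-vxlan0.network (illustrative sketch)
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20

# One all-zero FDB entry per remote endpoint ('dests' in the task item)
# enables unicast flooding in place of a multicast group.
[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.10
```

Repeating the [BridgeFDB] section once per destination (192.168.16.10 through .15 in this run) would give full-mesh flooding between the testbed nodes.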
2025-06-01 22:38:06.982004 | orchestrator | 2025-06-01 22:38:06.982194 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-06-01 22:38:06.983694 | orchestrator | 2025-06-01 22:38:06.985914 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-06-01 22:38:06.987318 | orchestrator | Sunday 01 June 2025 22:38:06 +0000 (0:00:00.246) 0:00:00.246 *********** 2025-06-01 22:38:08.572911 | orchestrator | ok: [testbed-manager] 2025-06-01 22:38:08.573107 | orchestrator | 2025-06-01 22:38:08.574213 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-06-01 22:38:08.575434 | orchestrator | Sunday 01 June 2025 22:38:08 +0000 (0:00:01.594) 0:00:01.841 *********** 2025-06-01 22:38:15.192475 | orchestrator | changed: [testbed-manager] 2025-06-01 22:38:15.192605 | orchestrator | 2025-06-01 22:38:15.193764 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-06-01 22:38:15.194239 | orchestrator | Sunday 01 June 2025 22:38:15 +0000 (0:00:06.619) 0:00:08.460 *********** 2025-06-01 22:38:15.735826 | orchestrator | changed: [testbed-manager] 2025-06-01 22:38:15.736602 | orchestrator | 2025-06-01 22:38:15.737415 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-06-01 22:38:15.738437 | orchestrator | Sunday 01 June 2025 22:38:15 +0000 (0:00:00.544) 0:00:09.005 *********** 2025-06-01 22:38:16.193439 | orchestrator | changed: [testbed-manager] 2025-06-01 22:38:16.195472 | orchestrator | 2025-06-01 22:38:16.197515 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-06-01 22:38:16.199050 | orchestrator | Sunday 01 June 2025 22:38:16 +0000 (0:00:00.456) 0:00:09.462 *********** 2025-06-01 22:38:16.731987 | orchestrator | ok: [testbed-manager] 2025-06-01 22:38:16.732932 | orchestrator | 2025-06-01 
22:38:16.734443 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-06-01 22:38:16.734972 | orchestrator | Sunday 01 June 2025 22:38:16 +0000 (0:00:00.540) 0:00:10.002 *********** 2025-06-01 22:38:17.304729 | orchestrator | ok: [testbed-manager] 2025-06-01 22:38:17.305039 | orchestrator | 2025-06-01 22:38:17.306477 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-06-01 22:38:17.306768 | orchestrator | Sunday 01 June 2025 22:38:17 +0000 (0:00:00.570) 0:00:10.573 *********** 2025-06-01 22:38:17.748210 | orchestrator | ok: [testbed-manager] 2025-06-01 22:38:17.749054 | orchestrator | 2025-06-01 22:38:17.750879 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-06-01 22:38:17.750923 | orchestrator | Sunday 01 June 2025 22:38:17 +0000 (0:00:00.444) 0:00:11.017 *********** 2025-06-01 22:38:18.979732 | orchestrator | changed: [testbed-manager] 2025-06-01 22:38:18.980218 | orchestrator | 2025-06-01 22:38:18.980633 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-06-01 22:38:18.982146 | orchestrator | Sunday 01 June 2025 22:38:18 +0000 (0:00:01.231) 0:00:12.249 *********** 2025-06-01 22:38:19.919644 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-01 22:38:19.919776 | orchestrator | changed: [testbed-manager] 2025-06-01 22:38:19.920113 | orchestrator | 2025-06-01 22:38:19.920786 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-06-01 22:38:19.921663 | orchestrator | Sunday 01 June 2025 22:38:19 +0000 (0:00:00.937) 0:00:13.186 *********** 2025-06-01 22:38:21.602122 | orchestrator | changed: [testbed-manager] 2025-06-01 22:38:21.603547 | orchestrator | 2025-06-01 22:38:21.604512 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-06-01 
22:38:21.606472 | orchestrator | Sunday 01 June 2025 22:38:21 +0000 (0:00:01.683) 0:00:14.869 ***********
2025-06-01 22:38:22.539387 | orchestrator | changed: [testbed-manager]
2025-06-01 22:38:22.539965 | orchestrator |
2025-06-01 22:38:22.541495 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:38:22.541806 | orchestrator | 2025-06-01 22:38:22 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 22:38:22.542115 | orchestrator | 2025-06-01 22:38:22 | INFO  | Please wait and do not abort execution.
2025-06-01 22:38:22.543036 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:38:22.543855 | orchestrator |
2025-06-01 22:38:22.544450 | orchestrator |
2025-06-01 22:38:22.545450 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:38:22.545708 | orchestrator | Sunday 01 June 2025 22:38:22 +0000 (0:00:00.939) 0:00:15.809 ***********
2025-06-01 22:38:22.546006 | orchestrator | ===============================================================================
2025-06-01 22:38:22.546219 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.62s
2025-06-01 22:38:22.546619 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.68s
2025-06-01 22:38:22.546993 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.59s
2025-06-01 22:38:22.547209 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.23s
2025-06-01 22:38:22.547532 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.94s
2025-06-01 22:38:22.547763 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.94s
2025-06-01 22:38:22.548062 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.57s
2025-06-01 22:38:22.548331 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s
2025-06-01 22:38:22.548562 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.54s
2025-06-01 22:38:22.548795 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.46s
2025-06-01 22:38:22.548880 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s
2025-06-01 22:38:23.140989 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-06-01 22:38:23.186257 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-06-01 22:38:23.186321 | orchestrator | Dload Upload Total Spent Left Speed
2025-06-01 22:38:23.259979 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 189 0 --:--:-- --:--:-- --:--:-- 191
2025-06-01 22:38:23.275722 | orchestrator | + osism apply --environment custom workarounds
2025-06-01 22:38:24.958690 | orchestrator | 2025-06-01 22:38:24 | INFO  | Trying to run play workarounds in environment custom
2025-06-01 22:38:24.963687 | orchestrator | Registering Redlock._acquired_script
2025-06-01 22:38:24.963850 | orchestrator | Registering Redlock._extend_script
2025-06-01 22:38:24.963869 | orchestrator | Registering Redlock._release_script
2025-06-01 22:38:25.027335 | orchestrator | 2025-06-01 22:38:25 | INFO  | Task 5606ec7b-0f87-4ded-b256-62c742f678ab (workarounds) was prepared for execution.
2025-06-01 22:38:25.027425 | orchestrator | 2025-06-01 22:38:25 | INFO  | It takes a moment until task 5606ec7b-0f87-4ded-b256-62c742f678ab (workarounds) has been started and output is visible here.
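The `Copy wg0.conf configuration file` task above templates out the server-side WireGuard configuration that `wg-quick@wg0.service` then brings up. As a sketch only, a minimal server `wg0.conf` has the following shape; the interface address, port, and key placeholders below are illustrative and not values from this run:

```shell
# Sketch: shape of a server-side wg0.conf as templated by a WireGuard role.
# All addresses and keys are illustrative placeholders, not values from this job.
cat > wg0.conf.example <<'EOF'
[Interface]
# Server side; PrivateKey would come from `wg genkey`
Address = 192.168.90.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# One block per client; PublicKey from the client's `wg pubkey`
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.90.2/32
EOF
```

With real keys in place, `wg-quick up wg0` (what the systemd unit runs) would bring the tunnel up, which is why the handler above restarts the wg0 service after the configuration files change.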
2025-06-01 22:38:29.057864 | orchestrator | 2025-06-01 22:38:29.058115 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 22:38:29.059542 | orchestrator | 2025-06-01 22:38:29.062617 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-06-01 22:38:29.063555 | orchestrator | Sunday 01 June 2025 22:38:29 +0000 (0:00:00.147) 0:00:00.147 *********** 2025-06-01 22:38:29.227666 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-06-01 22:38:29.305523 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-06-01 22:38:29.387205 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-06-01 22:38:29.468878 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-06-01 22:38:29.667583 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-06-01 22:38:29.831730 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-06-01 22:38:29.832972 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-06-01 22:38:29.834002 | orchestrator | 2025-06-01 22:38:29.835208 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-06-01 22:38:29.836288 | orchestrator | 2025-06-01 22:38:29.837386 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-06-01 22:38:29.838780 | orchestrator | Sunday 01 June 2025 22:38:29 +0000 (0:00:00.776) 0:00:00.924 *********** 2025-06-01 22:38:32.409175 | orchestrator | ok: [testbed-manager] 2025-06-01 22:38:32.409347 | orchestrator | 2025-06-01 22:38:32.410464 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-06-01 22:38:32.412755 | orchestrator | 2025-06-01 22:38:32.413961 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-06-01 22:38:32.415356 | orchestrator | Sunday 01 June 2025 22:38:32 +0000 (0:00:02.571) 0:00:03.495 *********** 2025-06-01 22:38:34.239972 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:38:34.242872 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:38:34.247053 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:38:34.247080 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:38:34.247089 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:38:34.247133 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:38:34.247961 | orchestrator | 2025-06-01 22:38:34.248875 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-06-01 22:38:34.249789 | orchestrator | 2025-06-01 22:38:34.250557 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-06-01 22:38:34.251569 | orchestrator | Sunday 01 June 2025 22:38:34 +0000 (0:00:01.834) 0:00:05.330 *********** 2025-06-01 22:38:35.725002 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-01 22:38:35.725194 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-01 22:38:35.725276 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-01 22:38:35.725574 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-01 22:38:35.725927 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-01 22:38:35.726577 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-01 22:38:35.727233 | orchestrator | 2025-06-01 22:38:35.727922 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-06-01 22:38:35.728568 | orchestrator | Sunday 01 June 2025 22:38:35 +0000 (0:00:01.483) 0:00:06.813 *********** 2025-06-01 22:38:39.452378 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:38:39.453030 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:38:39.455235 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:38:39.455507 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:38:39.457058 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:38:39.457784 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:38:39.458479 | orchestrator | 2025-06-01 22:38:39.459879 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-06-01 22:38:39.460462 | orchestrator | Sunday 01 June 2025 22:38:39 +0000 (0:00:03.729) 0:00:10.543 *********** 2025-06-01 22:38:39.605554 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:38:39.685068 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:38:39.760910 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:38:39.845078 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:38:40.160338 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:38:40.160836 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:38:40.161705 | orchestrator | 2025-06-01 22:38:40.162777 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-06-01 22:38:40.164501 | orchestrator | 2025-06-01 22:38:40.165719 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-06-01 22:38:40.167032 | orchestrator | Sunday 01 June 2025 22:38:40 +0000 (0:00:00.710) 0:00:11.253 *********** 2025-06-01 22:38:41.822851 | orchestrator | changed: [testbed-manager] 2025-06-01 22:38:41.823056 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:38:41.824988 | orchestrator | changed: [testbed-node-4] 2025-06-01 
22:38:41.825791 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:38:41.826640 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:38:41.828092 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:38:41.828802 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:38:41.829758 | orchestrator | 2025-06-01 22:38:41.830351 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-06-01 22:38:41.831186 | orchestrator | Sunday 01 June 2025 22:38:41 +0000 (0:00:01.659) 0:00:12.913 *********** 2025-06-01 22:38:43.458941 | orchestrator | changed: [testbed-manager] 2025-06-01 22:38:43.459396 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:38:43.460452 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:38:43.460960 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:38:43.465279 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:38:43.465464 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:38:43.465889 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:38:43.466631 | orchestrator | 2025-06-01 22:38:43.467258 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-06-01 22:38:43.469906 | orchestrator | Sunday 01 June 2025 22:38:43 +0000 (0:00:01.632) 0:00:14.546 *********** 2025-06-01 22:38:44.954909 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:38:44.956378 | orchestrator | ok: [testbed-manager] 2025-06-01 22:38:44.956994 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:38:44.958200 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:38:44.960195 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:38:44.962326 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:38:44.962350 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:38:44.965001 | orchestrator | 2025-06-01 22:38:44.965024 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-06-01 22:38:44.965039 | orchestrator 
| Sunday 01 June 2025 22:38:44 +0000 (0:00:01.498) 0:00:16.044 *********** 2025-06-01 22:38:46.728666 | orchestrator | changed: [testbed-manager] 2025-06-01 22:38:46.729256 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:38:46.731278 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:38:46.735248 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:38:46.735333 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:38:46.735348 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:38:46.735719 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:38:46.736720 | orchestrator | 2025-06-01 22:38:46.737282 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-06-01 22:38:46.737862 | orchestrator | Sunday 01 June 2025 22:38:46 +0000 (0:00:01.770) 0:00:17.814 *********** 2025-06-01 22:38:46.908827 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:38:46.991722 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:38:47.070542 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:38:47.147547 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:38:47.229182 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:38:47.345039 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:38:47.347074 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:38:47.350920 | orchestrator | 2025-06-01 22:38:47.353052 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-06-01 22:38:47.353270 | orchestrator | 2025-06-01 22:38:47.358184 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-06-01 22:38:47.358228 | orchestrator | Sunday 01 June 2025 22:38:47 +0000 (0:00:00.620) 0:00:18.435 *********** 2025-06-01 22:38:49.926238 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:38:49.926487 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:38:49.926785 | orchestrator | ok: 
[testbed-manager]
2025-06-01 22:38:49.927367 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:38:49.928092 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:38:49.928699 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:38:49.929232 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:38:49.929730 | orchestrator |
2025-06-01 22:38:49.930730 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:38:49.931159 | orchestrator | 2025-06-01 22:38:49 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 22:38:49.931187 | orchestrator | 2025-06-01 22:38:49 | INFO  | Please wait and do not abort execution.
2025-06-01 22:38:49.932589 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 22:38:49.932755 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:38:49.933371 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:38:49.933948 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:38:49.934744 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:38:49.935518 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:38:49.935799 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:38:49.937244 | orchestrator |
2025-06-01 22:38:49.937668 | orchestrator |
2025-06-01 22:38:49.938650 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:38:49.938960 | orchestrator | Sunday 01 June 2025 22:38:49 +0000 (0:00:02.583) 0:00:21.018 ***********
2025-06-01 22:38:49.939767 | orchestrator | ===============================================================================
2025-06-01 22:38:49.940255 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.73s
2025-06-01 22:38:49.941087 | orchestrator | Install python3-docker -------------------------------------------------- 2.58s
2025-06-01 22:38:49.941307 | orchestrator | Apply netplan configuration --------------------------------------------- 2.57s
2025-06-01 22:38:49.941632 | orchestrator | Apply netplan configuration --------------------------------------------- 1.83s
2025-06-01 22:38:49.941934 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.77s
2025-06-01 22:38:49.942739 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.66s
2025-06-01 22:38:49.943334 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.63s
2025-06-01 22:38:49.943737 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.50s
2025-06-01 22:38:49.944248 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.48s
2025-06-01 22:38:49.944801 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.78s
2025-06-01 22:38:49.945233 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.71s
2025-06-01 22:38:49.945887 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.62s
2025-06-01 22:38:50.574463 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-06-01 22:38:52.276879 | orchestrator | Registering Redlock._acquired_script
2025-06-01 22:38:52.276983 | orchestrator | Registering Redlock._extend_script
2025-06-01 22:38:52.276999 | orchestrator | Registering Redlock._release_script
2025-06-01 22:38:52.341822 | orchestrator | 2025-06-01
22:38:52 | INFO  | Task 7b659a8d-5a49-4ea3-a960-71faba7bee20 (reboot) was prepared for execution. 2025-06-01 22:38:52.341903 | orchestrator | 2025-06-01 22:38:52 | INFO  | It takes a moment until task 7b659a8d-5a49-4ea3-a960-71faba7bee20 (reboot) has been started and output is visible here. 2025-06-01 22:38:56.483978 | orchestrator | 2025-06-01 22:38:56.484363 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-01 22:38:56.486441 | orchestrator | 2025-06-01 22:38:56.486702 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-01 22:38:56.489033 | orchestrator | Sunday 01 June 2025 22:38:56 +0000 (0:00:00.211) 0:00:00.211 *********** 2025-06-01 22:38:56.595606 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:38:56.595911 | orchestrator | 2025-06-01 22:38:56.597486 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-01 22:38:56.599883 | orchestrator | Sunday 01 June 2025 22:38:56 +0000 (0:00:00.114) 0:00:00.326 *********** 2025-06-01 22:38:57.508268 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:38:57.508401 | orchestrator | 2025-06-01 22:38:57.508779 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-01 22:38:57.510295 | orchestrator | Sunday 01 June 2025 22:38:57 +0000 (0:00:00.912) 0:00:01.238 *********** 2025-06-01 22:38:57.629767 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:38:57.630076 | orchestrator | 2025-06-01 22:38:57.632103 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-01 22:38:57.632812 | orchestrator | 2025-06-01 22:38:57.633584 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-01 22:38:57.634207 | orchestrator | Sunday 01 June 2025 22:38:57 +0000 (0:00:00.121) 0:00:01.360 *********** 2025-06-01 
22:38:57.740864 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:38:57.741964 | orchestrator | 2025-06-01 22:38:57.743399 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-01 22:38:57.744752 | orchestrator | Sunday 01 June 2025 22:38:57 +0000 (0:00:00.110) 0:00:01.470 *********** 2025-06-01 22:38:58.440621 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:38:58.441093 | orchestrator | 2025-06-01 22:38:58.442646 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-01 22:38:58.443241 | orchestrator | Sunday 01 June 2025 22:38:58 +0000 (0:00:00.700) 0:00:02.171 *********** 2025-06-01 22:38:58.564195 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:38:58.565498 | orchestrator | 2025-06-01 22:38:58.567833 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-01 22:38:58.569009 | orchestrator | 2025-06-01 22:38:58.570279 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-01 22:38:58.570611 | orchestrator | Sunday 01 June 2025 22:38:58 +0000 (0:00:00.120) 0:00:02.291 *********** 2025-06-01 22:38:58.776223 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:38:58.777059 | orchestrator | 2025-06-01 22:38:58.778208 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-01 22:38:58.779090 | orchestrator | Sunday 01 June 2025 22:38:58 +0000 (0:00:00.215) 0:00:02.507 *********** 2025-06-01 22:38:59.423646 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:38:59.423764 | orchestrator | 2025-06-01 22:38:59.426807 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-01 22:38:59.427354 | orchestrator | Sunday 01 June 2025 22:38:59 +0000 (0:00:00.646) 0:00:03.153 *********** 2025-06-01 22:38:59.575085 | orchestrator | skipping: 
[testbed-node-2] 2025-06-01 22:38:59.575263 | orchestrator | 2025-06-01 22:38:59.576181 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-01 22:38:59.576707 | orchestrator | 2025-06-01 22:38:59.578701 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-01 22:38:59.579796 | orchestrator | Sunday 01 June 2025 22:38:59 +0000 (0:00:00.146) 0:00:03.300 *********** 2025-06-01 22:38:59.686173 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:38:59.686294 | orchestrator | 2025-06-01 22:38:59.688666 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-01 22:38:59.689463 | orchestrator | Sunday 01 June 2025 22:38:59 +0000 (0:00:00.113) 0:00:03.413 *********** 2025-06-01 22:39:00.359835 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:39:00.360024 | orchestrator | 2025-06-01 22:39:00.361101 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-01 22:39:00.361700 | orchestrator | Sunday 01 June 2025 22:39:00 +0000 (0:00:00.676) 0:00:04.090 *********** 2025-06-01 22:39:00.480693 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:39:00.481430 | orchestrator | 2025-06-01 22:39:00.482285 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-01 22:39:00.483250 | orchestrator | 2025-06-01 22:39:00.484400 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-01 22:39:00.485404 | orchestrator | Sunday 01 June 2025 22:39:00 +0000 (0:00:00.118) 0:00:04.209 *********** 2025-06-01 22:39:00.596734 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:39:00.597473 | orchestrator | 2025-06-01 22:39:00.597693 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-01 22:39:00.598297 | orchestrator | 
Sunday 01 June 2025 22:39:00 +0000 (0:00:00.114) 0:00:04.324 ***********
2025-06-01 22:39:01.273730 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:39:01.275519 | orchestrator |
2025-06-01 22:39:01.276280 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-01 22:39:01.277277 | orchestrator | Sunday 01 June 2025 22:39:01 +0000 (0:00:00.675) 0:00:04.999 ***********
2025-06-01 22:39:01.381211 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:39:01.382348 | orchestrator |
2025-06-01 22:39:01.383806 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-01 22:39:01.384946 | orchestrator |
2025-06-01 22:39:01.386304 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-01 22:39:01.386662 | orchestrator | Sunday 01 June 2025 22:39:01 +0000 (0:00:00.110) 0:00:05.109 ***********
2025-06-01 22:39:01.473281 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:39:01.474377 | orchestrator |
2025-06-01 22:39:01.475545 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-01 22:39:01.476592 | orchestrator | Sunday 01 June 2025 22:39:01 +0000 (0:00:00.094) 0:00:05.204 ***********
2025-06-01 22:39:02.173566 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:39:02.173707 | orchestrator |
2025-06-01 22:39:02.173726 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-01 22:39:02.173742 | orchestrator | Sunday 01 June 2025 22:39:02 +0000 (0:00:00.695) 0:00:05.899 ***********
2025-06-01 22:39:02.215582 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:39:02.215923 | orchestrator |
2025-06-01 22:39:02.217379 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:39:02.217568 | orchestrator | 2025-06-01 22:39:02 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 22:39:02.217684 | orchestrator | 2025-06-01 22:39:02 | INFO  | Please wait and do not abort execution.
2025-06-01 22:39:02.219143 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:39:02.219628 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:39:02.220564 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:39:02.220993 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:39:02.221778 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:39:02.222465 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:39:02.223150 | orchestrator |
2025-06-01 22:39:02.224777 | orchestrator |
2025-06-01 22:39:02.225676 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:39:02.226746 | orchestrator | Sunday 01 June 2025 22:39:02 +0000 (0:00:00.047) 0:00:05.946 ***********
2025-06-01 22:39:02.227588 | orchestrator | ===============================================================================
2025-06-01 22:39:02.228590 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.31s
2025-06-01 22:39:02.229518 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.76s
2025-06-01 22:39:02.230351 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.67s
2025-06-01 22:39:02.844427 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-06-01 22:39:04.536050 | orchestrator | Registering Redlock._acquired_script
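The reboot play guards itself behind the `ireallymeanit=yes` extra variable: its first task exits the play unless the caller passed that value, which is why the wrapper runs `osism apply reboot -l testbed-nodes -e ireallymeanit=yes`. The gate can be sketched in plain shell; the function name is illustrative, and the real play performs the reboot via Ansible rather than echoing:

```shell
# Sketch of the ireallymeanit=yes confirmation gate used by the reboot play.
maybe_reboot() {
    confirm="$1"    # the play expects the extra variable ireallymeanit=yes
    if [ "$confirm" != "ireallymeanit=yes" ]; then
        echo "Exit playbook, if user did not mean to reboot systems"
        return 1
    fi
    # The real task triggers the reboot without waiting for it to complete;
    # echoed here to keep the sketch side-effect free.
    echo "Reboot system - do not wait for the reboot to complete"
}

maybe_reboot "ireallymeanit=yes"
maybe_reboot "" || true
```

Rebooting without waiting lets all six nodes restart in parallel; a separate wait-for-connection run then blocks until they are reachable again.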
2025-06-01 22:39:04.536244 | orchestrator | Registering Redlock._extend_script
2025-06-01 22:39:04.536298 | orchestrator | Registering Redlock._release_script
2025-06-01 22:39:04.599375 | orchestrator | 2025-06-01 22:39:04 | INFO  | Task 6ee99032-040e-446d-a0b0-3dbeac4e7af0 (wait-for-connection) was prepared for execution.
2025-06-01 22:39:04.599490 | orchestrator | 2025-06-01 22:39:04 | INFO  | It takes a moment until task 6ee99032-040e-446d-a0b0-3dbeac4e7af0 (wait-for-connection) has been started and output is visible here.
2025-06-01 22:39:08.848084 | orchestrator |
2025-06-01 22:39:08.848392 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-06-01 22:39:08.849556 | orchestrator |
2025-06-01 22:39:08.850420 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-06-01 22:39:08.852716 | orchestrator | Sunday 01 June 2025 22:39:08 +0000 (0:00:00.284) 0:00:00.284 ***********
2025-06-01 22:39:21.362457 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:39:21.362610 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:39:21.362627 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:39:21.362639 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:39:21.363362 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:39:21.364703 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:39:21.366153 | orchestrator |
2025-06-01 22:39:21.369150 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:39:21.369199 | orchestrator | 2025-06-01 22:39:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 22:39:21.369215 | orchestrator | 2025-06-01 22:39:21 | INFO  | Please wait and do not abort execution.
2025-06-01 22:39:21.370134 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:39:21.371338 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:39:21.371870 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:39:21.372785 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:39:21.373704 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:39:21.374466 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:39:21.375347 | orchestrator |
2025-06-01 22:39:21.375772 | orchestrator |
2025-06-01 22:39:21.376540 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:39:21.378141 | orchestrator | Sunday 01 June 2025 22:39:21 +0000 (0:00:12.511) 0:00:12.796 ***********
2025-06-01 22:39:21.378166 | orchestrator | ===============================================================================
2025-06-01 22:39:21.378178 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.51s
2025-06-01 22:39:21.997643 | orchestrator | + osism apply hddtemp
2025-06-01 22:39:23.778562 | orchestrator | Registering Redlock._acquired_script
2025-06-01 22:39:23.778691 | orchestrator | Registering Redlock._extend_script
2025-06-01 22:39:23.778707 | orchestrator | Registering Redlock._release_script
2025-06-01 22:39:23.836512 | orchestrator | 2025-06-01 22:39:23 | INFO  | Task badf76ba-0d09-41b0-8b51-9861b4974a62 (hddtemp) was prepared for execution.
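The `Wait until remote system is reachable` task above took 12.51s: it retries a connection probe until every rebooted node answers or a deadline passes. The retry pattern can be sketched in shell; the `probe` stub, deadline, and interval below are illustrative, not part of the real play:

```shell
# Sketch: poll a reachability probe until it succeeds or a deadline passes.
# `probe` is a stand-in for whatever check applies (ssh, an Ansible ping, ...).
wait_for_connection() {
    deadline=$1; interval=$2; elapsed=0
    while [ "$elapsed" -lt "$deadline" ]; do
        if probe; then
            echo "ok after ${elapsed}s"
            return 0
        fi
        sleep "$interval"
        elapsed=$((elapsed + interval))
    done
    echo "timed out after ${deadline}s"
    return 1
}

# Stub probe that succeeds on the third attempt, to exercise the loop.
attempts=0
probe() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

wait_for_connection 10 1
```

The deadline matters because the preceding reboot returned immediately; without a generous timeout, a slow-booting node would fail this task instead of simply taking a few extra seconds.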
2025-06-01 22:39:23.836624 | orchestrator | 2025-06-01 22:39:23 | INFO  | It takes a moment until task badf76ba-0d09-41b0-8b51-9861b4974a62 (hddtemp) has been started and output is visible here. 2025-06-01 22:39:27.984440 | orchestrator | 2025-06-01 22:39:27.987564 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-06-01 22:39:27.988235 | orchestrator | 2025-06-01 22:39:27.989226 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-06-01 22:39:27.990841 | orchestrator | Sunday 01 June 2025 22:39:27 +0000 (0:00:00.263) 0:00:00.263 *********** 2025-06-01 22:39:28.153008 | orchestrator | ok: [testbed-manager] 2025-06-01 22:39:28.246279 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:39:28.337418 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:39:28.428012 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:39:28.624760 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:39:28.738720 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:39:28.740115 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:39:28.741393 | orchestrator | 2025-06-01 22:39:28.743742 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-06-01 22:39:28.744366 | orchestrator | Sunday 01 June 2025 22:39:28 +0000 (0:00:00.754) 0:00:01.018 *********** 2025-06-01 22:39:29.908988 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:39:29.909149 | orchestrator | 2025-06-01 22:39:29.909940 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-06-01 22:39:29.911130 | orchestrator | Sunday 01 June 2025 22:39:29 +0000 (0:00:01.163) 0:00:02.181 *********** 2025-06-01 22:39:31.881348 | 
orchestrator | ok: [testbed-manager] 2025-06-01 22:39:31.881485 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:39:31.881836 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:39:31.882841 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:39:31.884008 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:39:31.887000 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:39:31.888516 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:39:31.888855 | orchestrator | 2025-06-01 22:39:31.890619 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-06-01 22:39:31.891570 | orchestrator | Sunday 01 June 2025 22:39:31 +0000 (0:00:01.980) 0:00:04.161 *********** 2025-06-01 22:39:32.546239 | orchestrator | changed: [testbed-manager] 2025-06-01 22:39:32.630938 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:39:33.087059 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:39:33.087940 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:39:33.089617 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:39:33.090389 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:39:33.091421 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:39:33.092750 | orchestrator | 2025-06-01 22:39:33.093705 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-06-01 22:39:33.095275 | orchestrator | Sunday 01 June 2025 22:39:33 +0000 (0:00:01.202) 0:00:05.364 *********** 2025-06-01 22:39:34.227313 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:39:34.227524 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:39:34.230433 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:39:34.230467 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:39:34.230479 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:39:34.231791 | orchestrator | ok: [testbed-manager] 2025-06-01 22:39:34.232544 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:39:34.232987 | orchestrator | 
2025-06-01 22:39:34.233805 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-06-01 22:39:34.234592 | orchestrator | Sunday 01 June 2025 22:39:34 +0000 (0:00:01.143) 0:00:06.508 ***********
2025-06-01 22:39:34.690949 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:39:34.773211 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:39:34.850942 | orchestrator | changed: [testbed-manager]
2025-06-01 22:39:34.931985 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:39:35.087546 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:39:35.090434 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:39:35.090488 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:39:35.091592 | orchestrator |
2025-06-01 22:39:35.092477 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-06-01 22:39:35.093804 | orchestrator | Sunday 01 June 2025 22:39:35 +0000 (0:00:00.856) 0:00:07.365 ***********
2025-06-01 22:39:47.041372 | orchestrator | changed: [testbed-manager]
2025-06-01 22:39:47.041533 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:39:47.041891 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:39:47.041915 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:39:47.042212 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:39:47.043741 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:39:47.045727 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:39:47.046269 | orchestrator |
2025-06-01 22:39:47.046874 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-06-01 22:39:47.047779 | orchestrator | Sunday 01 June 2025 22:39:47 +0000 (0:00:11.953) 0:00:19.318 ***********
2025-06-01 22:39:48.453679 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:39:48.454436 | orchestrator |
2025-06-01 22:39:48.455362 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-06-01 22:39:48.455773 | orchestrator | Sunday 01 June 2025 22:39:48 +0000 (0:00:01.414) 0:00:20.732 ***********
2025-06-01 22:39:50.303779 | orchestrator | changed: [testbed-manager]
2025-06-01 22:39:50.304261 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:39:50.305521 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:39:50.306206 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:39:50.307202 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:39:50.308813 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:39:50.309337 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:39:50.309515 | orchestrator |
2025-06-01 22:39:50.311121 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:39:50.311165 | orchestrator | 2025-06-01 22:39:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 22:39:50.311179 | orchestrator | 2025-06-01 22:39:50 | INFO  | Please wait and do not abort execution.
2025-06-01 22:39:50.311931 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:39:50.313093 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 22:39:50.313540 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 22:39:50.314366 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 22:39:50.315481 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 22:39:50.315854 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 22:39:50.316504 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 22:39:50.317346 | orchestrator |
2025-06-01 22:39:50.317523 | orchestrator |
2025-06-01 22:39:50.318283 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:39:50.319614 | orchestrator | Sunday 01 June 2025 22:39:50 +0000 (0:00:01.852) 0:00:22.585 ***********
2025-06-01 22:39:50.319926 | orchestrator | ===============================================================================
2025-06-01 22:39:50.320662 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 11.95s
2025-06-01 22:39:50.320736 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.98s
2025-06-01 22:39:50.321411 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.85s
2025-06-01 22:39:50.321871 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.41s
2025-06-01 22:39:50.322242 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.20s
2025-06-01 22:39:50.322503 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.16s
2025-06-01 22:39:50.322996 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.14s
2025-06-01 22:39:50.324093 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.86s
2025-06-01 22:39:50.324745 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.75s
2025-06-01 22:39:50.959465 | orchestrator | + sudo systemctl restart docker-compose@manager
2025-06-01 22:39:52.472461 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-06-01 22:39:52.472595 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-06-01 22:39:52.472610 | orchestrator | + local max_attempts=60
2025-06-01 22:39:52.472622 | orchestrator | + local name=ceph-ansible
2025-06-01 22:39:52.472632 | orchestrator | + local attempt_num=1
2025-06-01 22:39:52.472643 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-01 22:39:52.515514 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-01 22:39:52.515601 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-06-01 22:39:52.515622 | orchestrator | + local max_attempts=60
2025-06-01 22:39:52.515640 | orchestrator | + local name=kolla-ansible
2025-06-01 22:39:52.515657 | orchestrator | + local attempt_num=1
2025-06-01 22:39:52.516165 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-06-01 22:39:52.550279 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-01 22:39:52.550366 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-06-01 22:39:52.550375 | orchestrator | + local max_attempts=60
2025-06-01 22:39:52.550382 | orchestrator | + local name=osism-ansible
2025-06-01 22:39:52.550389 | orchestrator | + local attempt_num=1
2025-06-01 22:39:52.550689 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-06-01 22:39:52.593067 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-01 22:39:52.593173 | orchestrator | + [[ true == \t\r\u\e ]]
2025-06-01 22:39:52.593185 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-06-01 22:39:52.807138 | orchestrator | ARA in ceph-ansible already disabled.
2025-06-01 22:39:52.973353 | orchestrator | ARA in kolla-ansible already disabled.
2025-06-01 22:39:53.145508 | orchestrator | ARA in osism-ansible already disabled.
2025-06-01 22:39:53.339451 | orchestrator | ARA in osism-kubernetes already disabled.
2025-06-01 22:39:53.340255 | orchestrator | + osism apply gather-facts
2025-06-01 22:39:55.058308 | orchestrator | Registering Redlock._acquired_script
2025-06-01 22:39:55.058414 | orchestrator | Registering Redlock._extend_script
2025-06-01 22:39:55.058425 | orchestrator | Registering Redlock._release_script
2025-06-01 22:39:55.135410 | orchestrator | 2025-06-01 22:39:55 | INFO  | Task ba01140b-a08b-4a14-8209-48d4aa3b17d9 (gather-facts) was prepared for execution.
2025-06-01 22:39:55.136465 | orchestrator | 2025-06-01 22:39:55 | INFO  | It takes a moment until task ba01140b-a08b-4a14-8209-48d4aa3b17d9 (gather-facts) has been started and output is visible here.
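The `set -x` trace above shows `wait_for_container_healthy` polling `docker inspect` for each manager container until its health check reports `healthy`. A minimal sketch of such a helper, reconstructed from the trace (the polling interval and the exact failure message are assumptions, not taken from the testbed scripts):

```shell
#!/usr/bin/env bash
# Sketch: poll a container's Docker health status until it becomes "healthy"
# or the attempt budget is exhausted. Reconstructed from the trace above;
# the 5-second sleep is an assumed interval (the log does not show it).
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container ${name} did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the log this is called as `wait_for_container_healthy 60 ceph-ansible`; all three containers were already healthy, so each call returned on the first probe.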
2025-06-01 22:39:59.145964 | orchestrator |
2025-06-01 22:39:59.147077 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-01 22:39:59.148941 | orchestrator |
2025-06-01 22:39:59.149980 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-01 22:39:59.151296 | orchestrator | Sunday 01 June 2025 22:39:59 +0000 (0:00:00.220) 0:00:00.220 ***********
2025-06-01 22:40:04.316066 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:40:04.316881 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:40:04.317382 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:40:04.322281 | orchestrator | ok: [testbed-manager]
2025-06-01 22:40:04.322810 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:40:04.324583 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:40:04.326647 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:40:04.327213 | orchestrator |
2025-06-01 22:40:04.327819 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-01 22:40:04.328795 | orchestrator |
2025-06-01 22:40:04.329252 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-01 22:40:04.330296 | orchestrator | Sunday 01 June 2025 22:40:04 +0000 (0:00:05.175) 0:00:05.396 ***********
2025-06-01 22:40:04.478553 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:40:04.557822 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:40:04.659761 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:40:04.747639 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:40:04.826485 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:40:04.873299 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:40:04.873972 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:40:04.874969 | orchestrator |
2025-06-01 22:40:04.876303 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:40:04.876583 | orchestrator | 2025-06-01 22:40:04 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 22:40:04.877361 | orchestrator | 2025-06-01 22:40:04 | INFO  | Please wait and do not abort execution.
2025-06-01 22:40:04.878274 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 22:40:04.878600 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 22:40:04.879697 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 22:40:04.880510 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 22:40:04.881807 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 22:40:04.882228 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 22:40:04.883197 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-01 22:40:04.883870 | orchestrator |
2025-06-01 22:40:04.884647 | orchestrator |
2025-06-01 22:40:04.885315 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:40:04.885672 | orchestrator | Sunday 01 June 2025 22:40:04 +0000 (0:00:00.556) 0:00:05.953 ***********
2025-06-01 22:40:04.886602 | orchestrator | ===============================================================================
2025-06-01 22:40:04.887179 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.18s
2025-06-01 22:40:04.887895 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s
2025-06-01 22:40:05.579014 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-06-01 22:40:05.597504 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-06-01 22:40:05.611499 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-06-01 22:40:05.625886 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-06-01 22:40:05.640104 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-06-01 22:40:05.656068 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-06-01 22:40:05.672447 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-06-01 22:40:05.691729 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-06-01 22:40:05.709553 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-06-01 22:40:05.728472 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-06-01 22:40:05.752331 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-06-01 22:40:05.775014 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-06-01 22:40:05.797820 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-06-01 22:40:05.817791 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-06-01 22:40:05.839127 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-06-01 22:40:05.854345 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-06-01 22:40:05.873939 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-06-01 22:40:05.895327 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-06-01 22:40:05.912875 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-06-01 22:40:05.931714 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-06-01 22:40:05.951954 | orchestrator | + [[ false == \t\r\u\e ]]
2025-06-01 22:40:06.152727 | orchestrator | ok: Runtime: 0:18:58.052447
2025-06-01 22:40:06.288757 |
2025-06-01 22:40:06.288966 | TASK [Deploy services]
2025-06-01 22:40:06.821864 | orchestrator | skipping: Conditional result was False
2025-06-01 22:40:06.839128 |
2025-06-01 22:40:06.839324 | TASK [Deploy in a nutshell]
2025-06-01 22:40:07.514100 | orchestrator | + set -e
2025-06-01 22:40:07.514291 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-01 22:40:07.514317 | orchestrator | ++ export INTERACTIVE=false
2025-06-01 22:40:07.514338 | orchestrator | ++ INTERACTIVE=false
2025-06-01 22:40:07.514352 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-01 22:40:07.514365 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-01 22:40:07.514379 | orchestrator | + source /opt/manager-vars.sh
2025-06-01 22:40:07.514423 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-01 22:40:07.514452 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-01 22:40:07.514467 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-01 22:40:07.514483 | orchestrator | ++ CEPH_VERSION=reef
2025-06-01 22:40:07.514495 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-01 22:40:07.514513 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-01 22:40:07.514525 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-01 22:40:07.514545 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-01 22:40:07.514556 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-01 22:40:07.514570 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-01 22:40:07.514581 | orchestrator | ++ export ARA=false
2025-06-01 22:40:07.514593 | orchestrator | ++ ARA=false
2025-06-01 22:40:07.514604 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-01 22:40:07.514616 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-01 22:40:07.514627 | orchestrator | ++ export TEMPEST=false
2025-06-01 22:40:07.514638 | orchestrator | ++ TEMPEST=false
2025-06-01 22:40:07.514662 | orchestrator | ++ export IS_ZUUL=true
2025-06-01 22:40:07.514674 | orchestrator | ++ IS_ZUUL=true
2025-06-01 22:40:07.514685 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.90
2025-06-01 22:40:07.514697 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.90
2025-06-01 22:40:07.514708 | orchestrator |
2025-06-01 22:40:07.514720 | orchestrator | # PULL IMAGES
2025-06-01 22:40:07.514731 | orchestrator |
2025-06-01 22:40:07.514742 | orchestrator | ++ export EXTERNAL_API=false
2025-06-01 22:40:07.514753 | orchestrator | ++ EXTERNAL_API=false
2025-06-01 22:40:07.514764 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-01 22:40:07.514776 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-01 22:40:07.514786 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-01 22:40:07.514797 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-01 22:40:07.514809 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-01 22:40:07.514827 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-01 22:40:07.514839 | orchestrator | + echo
2025-06-01 22:40:07.514850 | orchestrator | + echo '# PULL IMAGES'
2025-06-01 22:40:07.514861 | orchestrator | + echo
2025-06-01 22:40:07.516069 | orchestrator | ++ semver 9.1.0 7.0.0
2025-06-01 22:40:07.583793 | orchestrator | + [[ 1 -ge 0 ]]
2025-06-01 22:40:07.583897 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-06-01 22:40:09.353389 | orchestrator | 2025-06-01 22:40:09 | INFO  | Trying to run play pull-images in environment custom
2025-06-01 22:40:09.358118 | orchestrator | Registering Redlock._acquired_script
2025-06-01 22:40:09.358185 | orchestrator | Registering Redlock._extend_script
2025-06-01 22:40:09.358198 | orchestrator | Registering Redlock._release_script
2025-06-01 22:40:09.422876 | orchestrator | 2025-06-01 22:40:09 | INFO  | Task 23be6845-42fe-4797-b34f-70647887635d (pull-images) was prepared for execution.
2025-06-01 22:40:09.422981 | orchestrator | 2025-06-01 22:40:09 | INFO  | It takes a moment until task 23be6845-42fe-4797-b34f-70647887635d (pull-images) has been started and output is visible here.
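The trace above runs `semver 9.1.0 7.0.0`, which prints `1` (first version is newer), and then gates the pull on `[[ 1 -ge 0 ]]`. The real `semver` helper is part of the testbed configuration and is not shown in the log; a minimal stand-in with the same output convention (-1/0/1), implemented here via GNU `sort -V` as an assumption, could look like this:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the semver helper seen in the trace: prints -1,
# 0, or 1 depending on how the first version compares to the second.
# Using `sort -V` (GNU version sort) is an implementation assumption.
semver() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]; then
        echo -1   # $1 sorts first, so it is the older version
    else
        echo 1    # $1 sorts last, so it is the newer version
    fi
}
```

With `MANAGER_VERSION=9.1.0` this returns `1`, matching the `[[ 1 -ge 0 ]]` check in the log.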
2025-06-01 22:40:13.418904 | orchestrator |
2025-06-01 22:40:13.419747 | orchestrator | PLAY [Pull images] *************************************************************
2025-06-01 22:40:13.420134 | orchestrator |
2025-06-01 22:40:13.420899 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-06-01 22:40:13.422378 | orchestrator | Sunday 01 June 2025 22:40:13 +0000 (0:00:00.148) 0:00:00.148 ***********
2025-06-01 22:41:16.902695 | orchestrator | changed: [testbed-manager]
2025-06-01 22:41:16.902823 | orchestrator |
2025-06-01 22:41:16.902844 | orchestrator | TASK [Pull other images] *******************************************************
2025-06-01 22:41:16.902857 | orchestrator | Sunday 01 June 2025 22:41:16 +0000 (0:01:03.485) 0:01:03.633 ***********
2025-06-01 22:42:13.885012 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-06-01 22:42:13.885148 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-06-01 22:42:13.887419 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-06-01 22:42:13.887449 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-06-01 22:42:13.890410 | orchestrator | changed: [testbed-manager] => (item=common)
2025-06-01 22:42:13.890647 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-06-01 22:42:13.891747 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-06-01 22:42:13.892856 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-06-01 22:42:13.893869 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-06-01 22:42:13.895404 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-06-01 22:42:13.896042 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-06-01 22:42:13.896524 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-06-01 22:42:13.897477 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-06-01 22:42:13.897990 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-06-01 22:42:13.898663 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-06-01 22:42:13.898856 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-06-01 22:42:13.899242 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-06-01 22:42:13.899726 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-06-01 22:42:13.900245 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-06-01 22:42:13.901056 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-06-01 22:42:13.902783 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-06-01 22:42:13.903637 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-06-01 22:42:13.904480 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-06-01 22:42:13.905183 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-06-01 22:42:13.906194 | orchestrator |
2025-06-01 22:42:13.906822 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:42:13.907511 | orchestrator | 2025-06-01 22:42:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 22:42:13.907534 | orchestrator | 2025-06-01 22:42:13 | INFO  | Please wait and do not abort execution.
2025-06-01 22:42:13.908457 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:42:13.908885 | orchestrator |
2025-06-01 22:42:13.909425 | orchestrator |
2025-06-01 22:42:13.909534 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:42:13.910222 | orchestrator | Sunday 01 June 2025 22:42:13 +0000 (0:00:56.982) 0:02:00.616 ***********
2025-06-01 22:42:13.910711 | orchestrator | ===============================================================================
2025-06-01 22:42:13.911088 | orchestrator | Pull keystone image ---------------------------------------------------- 63.49s
2025-06-01 22:42:13.911438 | orchestrator | Pull other images ------------------------------------------------------ 56.98s
2025-06-01 22:42:16.348001 | orchestrator | 2025-06-01 22:42:16 | INFO  | Trying to run play wipe-partitions in environment custom
2025-06-01 22:42:16.353358 | orchestrator | Registering Redlock._acquired_script
2025-06-01 22:42:16.353390 | orchestrator | Registering Redlock._extend_script
2025-06-01 22:42:16.353403 | orchestrator | Registering Redlock._release_script
2025-06-01 22:42:16.407498 | orchestrator | 2025-06-01 22:42:16 | INFO  | Task b7608b13-4c22-4b4c-803b-352a3344aca5 (wipe-partitions) was prepared for execution.
2025-06-01 22:42:16.407547 | orchestrator | 2025-06-01 22:42:16 | INFO  | It takes a moment until task b7608b13-4c22-4b4c-803b-352a3344aca5 (wipe-partitions) was started and output is visible here.
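The deploy scripts above pass a retry budget to `osism apply` (`-r 2` in the trace, with `OSISM_APPLY_RETRY=1` exported earlier). A generic retry wrapper in the same spirit might look like the sketch below; the function name, the fixed back-off, and the failure message are illustrative assumptions, not part of the testbed scripts:

```shell
#!/usr/bin/env bash
# Sketch: run a command up to N times, stopping at the first success.
# Mirrors the idea behind `osism apply -r 2 ...`; details are assumptions.
retry() {
    local attempts=$1; shift
    local n=1
    until "$@"; do
        if (( n >= attempts )); then
            echo "command failed after ${attempts} attempts: $*" >&2
            return 1
        fi
        n=$((n + 1))
        sleep 2  # assumed back-off between attempts
    done
}
```

Usage would be along the lines of `retry 2 osism apply pull-images`.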
2025-06-01 22:42:20.522680 | orchestrator |
2025-06-01 22:42:20.524411 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-06-01 22:42:20.524598 | orchestrator |
2025-06-01 22:42:20.527564 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-06-01 22:42:20.527772 | orchestrator | Sunday 01 June 2025 22:42:20 +0000 (0:00:00.162) 0:00:00.162 ***********
2025-06-01 22:42:21.102236 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:42:21.102389 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:42:21.107402 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:42:21.110359 | orchestrator |
2025-06-01 22:42:21.110807 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-06-01 22:42:21.119060 | orchestrator | Sunday 01 June 2025 22:42:21 +0000 (0:00:00.583) 0:00:00.746 ***********
2025-06-01 22:42:21.242109 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:42:21.318233 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:42:21.318459 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:42:21.318578 | orchestrator |
2025-06-01 22:42:21.318844 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-06-01 22:42:21.319126 | orchestrator | Sunday 01 June 2025 22:42:21 +0000 (0:00:00.217) 0:00:00.963 ***********
2025-06-01 22:42:22.018762 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:42:22.019075 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:42:22.021252 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:42:22.022318 | orchestrator |
2025-06-01 22:42:22.022467 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-06-01 22:42:22.022683 | orchestrator | Sunday 01 June 2025 22:42:22 +0000 (0:00:00.700) 0:00:01.663 ***********
2025-06-01 22:42:22.191480 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:42:22.304658 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:42:22.304746 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:42:22.304857 | orchestrator |
2025-06-01 22:42:22.310192 | orchestrator | TASK [Check device availability] ***********************************************
2025-06-01 22:42:22.311721 | orchestrator | Sunday 01 June 2025 22:42:22 +0000 (0:00:00.280) 0:00:01.943 ***********
2025-06-01 22:42:23.478645 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-06-01 22:42:23.479172 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-06-01 22:42:23.482505 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-06-01 22:42:23.482684 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-06-01 22:42:23.483068 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-06-01 22:42:23.483388 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-06-01 22:42:23.483775 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-06-01 22:42:23.484125 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-06-01 22:42:23.484474 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-06-01 22:42:23.484882 | orchestrator |
2025-06-01 22:42:23.485301 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-06-01 22:42:23.486278 | orchestrator | Sunday 01 June 2025 22:42:23 +0000 (0:00:01.179) 0:00:03.123 ***********
2025-06-01 22:42:24.839097 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-06-01 22:42:24.839753 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-06-01 22:42:24.841690 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-06-01 22:42:24.842500 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-06-01 22:42:24.843567 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-06-01 22:42:24.844174 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-06-01 22:42:24.845131 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-06-01 22:42:24.845615 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-06-01 22:42:24.846118 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-06-01 22:42:24.847053 | orchestrator |
2025-06-01 22:42:24.847304 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-06-01 22:42:24.848009 | orchestrator | Sunday 01 June 2025 22:42:24 +0000 (0:00:01.356) 0:00:04.480 ***********
2025-06-01 22:42:27.203144 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-06-01 22:42:27.205335 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-06-01 22:42:27.205367 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-06-01 22:42:27.205953 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-06-01 22:42:27.206435 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-06-01 22:42:27.208408 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-06-01 22:42:27.209081 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-06-01 22:42:27.210548 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-06-01 22:42:27.211435 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-06-01 22:42:27.212440 | orchestrator |
2025-06-01 22:42:27.213538 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-06-01 22:42:27.214184 | orchestrator | Sunday 01 June 2025 22:42:27 +0000 (0:00:02.361) 0:00:06.842 ***********
2025-06-01 22:42:27.825376 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:42:27.826274 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:42:27.830347 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:42:27.830406 | orchestrator |
2025-06-01 22:42:27.830432 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-06-01 22:42:27.830456 | orchestrator | Sunday 01 June 2025 22:42:27 +0000 (0:00:00.625) 0:00:07.467 ***********
2025-06-01 22:42:28.459071 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:42:28.461685 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:42:28.467141 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:42:28.467168 | orchestrator |
2025-06-01 22:42:28.467600 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:42:28.468317 | orchestrator | 2025-06-01 22:42:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 22:42:28.469253 | orchestrator | 2025-06-01 22:42:28 | INFO  | Please wait and do not abort execution.
2025-06-01 22:42:28.470746 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:42:28.473166 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:42:28.474121 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:42:28.474393 | orchestrator |
2025-06-01 22:42:28.475105 | orchestrator |
2025-06-01 22:42:28.475444 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:42:28.475820 | orchestrator | Sunday 01 June 2025 22:42:28 +0000 (0:00:00.632) 0:00:08.100 ***********
2025-06-01 22:42:28.476466 | orchestrator | ===============================================================================
2025-06-01 22:42:28.476724 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.36s
2025-06-01 22:42:28.477458 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.36s
2025-06-01 22:42:28.477688 | orchestrator | Check device availability ----------------------------------------------- 1.18s
2025-06-01 22:42:28.478234 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.70s
2025-06-01 22:42:28.478568 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s
2025-06-01 22:42:28.479296 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s
2025-06-01 22:42:28.479430 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.58s
2025-06-01 22:42:28.480117 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s
2025-06-01 22:42:28.480343 | orchestrator | Remove all rook related logical devices --------------------------------- 0.22s
2025-06-01 22:42:30.790007 | orchestrator | Registering Redlock._acquired_script
2025-06-01 22:42:30.790206 | orchestrator | Registering Redlock._extend_script
2025-06-01 22:42:30.790221 | orchestrator | Registering Redlock._release_script
2025-06-01 22:42:30.847875 | orchestrator | 2025-06-01 22:42:30 | INFO  | Task 04f2c21f-d425-4d7d-901c-979492ed6db6 (facts) was prepared for execution.
2025-06-01 22:42:30.848038 | orchestrator | 2025-06-01 22:42:30 | INFO  | It takes a moment until task 04f2c21f-d425-4d7d-901c-979492ed6db6 (facts) has been started and output is visible here.
2025-06-01 22:42:35.053616 | orchestrator | 2025-06-01 22:42:35.055204 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-01 22:42:35.056459 | orchestrator | 2025-06-01 22:42:35.057709 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-01 22:42:35.060858 | orchestrator | Sunday 01 June 2025 22:42:35 +0000 (0:00:00.271) 0:00:00.271 *********** 2025-06-01 22:42:36.187018 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:42:36.188026 | orchestrator | ok: [testbed-manager] 2025-06-01 22:42:36.189750 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:42:36.191773 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:42:36.192075 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:42:36.193522 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:42:36.194776 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:42:36.196212 | orchestrator | 2025-06-01 22:42:36.197099 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-01 22:42:36.198618 | orchestrator | Sunday 01 June 2025 22:42:36 +0000 (0:00:01.132) 0:00:01.404 *********** 2025-06-01 22:42:36.347959 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:42:36.433616 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:42:36.513507 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:42:36.593137 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:42:36.671854 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:42:37.409670 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:42:37.409774 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:42:37.409790 | orchestrator | 2025-06-01 22:42:37.410214 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-01 22:42:37.410252 | orchestrator | 2025-06-01 22:42:37.410271 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-06-01 22:42:37.410636 | orchestrator | Sunday 01 June 2025 22:42:37 +0000 (0:00:01.228) 0:00:02.632 *********** 2025-06-01 22:42:42.248389 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:42:42.248503 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:42:42.248518 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:42:42.248529 | orchestrator | ok: [testbed-manager] 2025-06-01 22:42:42.248539 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:42:42.248612 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:42:42.249102 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:42:42.249743 | orchestrator | 2025-06-01 22:42:42.253393 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-01 22:42:42.253479 | orchestrator | 2025-06-01 22:42:42.254083 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-01 22:42:42.254383 | orchestrator | Sunday 01 June 2025 22:42:42 +0000 (0:00:04.837) 0:00:07.469 *********** 2025-06-01 22:42:42.621130 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:42:42.702712 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:42:42.779697 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:42:42.859650 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:42:42.932236 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:42:42.990575 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:42:42.994095 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:42:42.995754 | orchestrator | 2025-06-01 22:42:42.997530 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:42:42.997959 | orchestrator | 2025-06-01 22:42:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-06-01 22:42:42.997985 | orchestrator | 2025-06-01 22:42:42 | INFO  | Please wait and do not abort execution. 2025-06-01 22:42:42.999170 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:42:43.000534 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:42:43.001795 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:42:43.002714 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:42:43.004081 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:42:43.005504 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:42:43.006673 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:42:43.007558 | orchestrator | 2025-06-01 22:42:43.009211 | orchestrator | 2025-06-01 22:42:43.010259 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:42:43.010686 | orchestrator | Sunday 01 June 2025 22:42:42 +0000 (0:00:00.744) 0:00:08.214 *********** 2025-06-01 22:42:43.011651 | orchestrator | =============================================================================== 2025-06-01 22:42:43.012636 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.84s 2025-06-01 22:42:43.013085 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.23s 2025-06-01 22:42:43.014122 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s 2025-06-01 22:42:43.014796 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.74s 2025-06-01 
22:42:45.738494 | orchestrator | 2025-06-01 22:42:45 | INFO  | Task e7ac47dd-9f60-4ddf-8e8b-fe817bd28e7a (ceph-configure-lvm-volumes) was prepared for execution. 2025-06-01 22:42:45.738620 | orchestrator | 2025-06-01 22:42:45 | INFO  | It takes a moment until task e7ac47dd-9f60-4ddf-8e8b-fe817bd28e7a (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-06-01 22:42:52.304125 | orchestrator | 2025-06-01 22:42:52.304600 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-01 22:42:52.308524 | orchestrator | 2025-06-01 22:42:52.309138 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-01 22:42:52.311565 | orchestrator | Sunday 01 June 2025 22:42:52 +0000 (0:00:00.478) 0:00:00.478 *********** 2025-06-01 22:42:52.598097 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-01 22:42:52.599279 | orchestrator | 2025-06-01 22:42:52.600489 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-01 22:42:52.602536 | orchestrator | Sunday 01 June 2025 22:42:52 +0000 (0:00:00.296) 0:00:00.775 *********** 2025-06-01 22:42:52.939802 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:42:52.940021 | orchestrator | 2025-06-01 22:42:52.940039 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:42:52.940121 | orchestrator | Sunday 01 June 2025 22:42:52 +0000 (0:00:00.342) 0:00:01.117 *********** 2025-06-01 22:42:53.430816 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-01 22:42:53.431059 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-01 22:42:53.431690 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-01 22:42:53.432823 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-01 22:42:53.433655 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-01 22:42:53.434471 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-01 22:42:53.437161 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-01 22:42:53.438367 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-01 22:42:53.439955 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-01 22:42:53.440209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-01 22:42:53.441764 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-01 22:42:53.445009 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-01 22:42:53.448471 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-01 22:42:53.448836 | orchestrator | 2025-06-01 22:42:53.455387 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:42:53.456013 | orchestrator | Sunday 01 June 2025 22:42:53 +0000 (0:00:00.486) 0:00:01.603 *********** 2025-06-01 22:42:54.019344 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:42:54.020588 | orchestrator | 2025-06-01 22:42:54.021358 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:42:54.022623 | orchestrator | Sunday 01 June 2025 22:42:54 +0000 (0:00:00.594) 0:00:02.198 *********** 2025-06-01 22:42:54.248233 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:42:54.249026 | orchestrator | 2025-06-01 22:42:54.251292 | orchestrator | 
TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:42:54.251778 | orchestrator | Sunday 01 June 2025 22:42:54 +0000 (0:00:00.226) 0:00:02.424 *********** 2025-06-01 22:42:54.487809 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:42:54.488584 | orchestrator | 2025-06-01 22:42:54.489442 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:42:54.490676 | orchestrator | Sunday 01 June 2025 22:42:54 +0000 (0:00:00.239) 0:00:02.663 *********** 2025-06-01 22:42:54.732140 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:42:54.732983 | orchestrator | 2025-06-01 22:42:54.734083 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:42:54.735314 | orchestrator | Sunday 01 June 2025 22:42:54 +0000 (0:00:00.247) 0:00:02.911 *********** 2025-06-01 22:42:54.953747 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:42:54.955790 | orchestrator | 2025-06-01 22:42:54.957336 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:42:54.958265 | orchestrator | Sunday 01 June 2025 22:42:54 +0000 (0:00:00.220) 0:00:03.131 *********** 2025-06-01 22:42:55.165938 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:42:55.166099 | orchestrator | 2025-06-01 22:42:55.167221 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:42:55.169298 | orchestrator | Sunday 01 June 2025 22:42:55 +0000 (0:00:00.211) 0:00:03.342 *********** 2025-06-01 22:42:55.365234 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:42:55.366228 | orchestrator | 2025-06-01 22:42:55.366645 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:42:55.367435 | orchestrator | Sunday 01 June 2025 22:42:55 +0000 (0:00:00.199) 0:00:03.542 *********** 2025-06-01 
22:42:55.580793 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:42:55.583363 | orchestrator | 2025-06-01 22:42:55.583619 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:42:55.584395 | orchestrator | Sunday 01 June 2025 22:42:55 +0000 (0:00:00.213) 0:00:03.755 *********** 2025-06-01 22:42:56.000116 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65) 2025-06-01 22:42:56.000737 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65) 2025-06-01 22:42:56.002146 | orchestrator | 2025-06-01 22:42:56.002753 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:42:56.003754 | orchestrator | Sunday 01 June 2025 22:42:55 +0000 (0:00:00.420) 0:00:04.176 *********** 2025-06-01 22:42:56.430416 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e3d9d8cc-8358-4e9f-a548-9ae6b89fa066) 2025-06-01 22:42:56.430543 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e3d9d8cc-8358-4e9f-a548-9ae6b89fa066) 2025-06-01 22:42:56.430825 | orchestrator | 2025-06-01 22:42:56.431134 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:42:56.432853 | orchestrator | Sunday 01 June 2025 22:42:56 +0000 (0:00:00.428) 0:00:04.605 *********** 2025-06-01 22:42:57.069549 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bed9961c-b7ee-4957-bf35-2fee53571a5a) 2025-06-01 22:42:57.072077 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bed9961c-b7ee-4957-bf35-2fee53571a5a) 2025-06-01 22:42:57.072701 | orchestrator | 2025-06-01 22:42:57.073729 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:42:57.074398 | orchestrator | Sunday 01 June 2025 22:42:57 +0000 
(0:00:00.641) 0:00:05.246 *********** 2025-06-01 22:42:57.751660 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6d04dff8-74fe-4097-ace0-4c437e5e0f9f) 2025-06-01 22:42:57.752045 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6d04dff8-74fe-4097-ace0-4c437e5e0f9f) 2025-06-01 22:42:57.752725 | orchestrator | 2025-06-01 22:42:57.754060 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:42:57.754829 | orchestrator | Sunday 01 June 2025 22:42:57 +0000 (0:00:00.683) 0:00:05.930 *********** 2025-06-01 22:42:58.548553 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-01 22:42:58.548658 | orchestrator | 2025-06-01 22:42:58.548673 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:42:58.551520 | orchestrator | Sunday 01 June 2025 22:42:58 +0000 (0:00:00.794) 0:00:06.724 *********** 2025-06-01 22:42:58.930541 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-01 22:42:58.930644 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-01 22:42:58.932816 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-01 22:42:58.933690 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-01 22:42:58.938402 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-01 22:42:58.939331 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-01 22:42:58.941394 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-01 22:42:58.942165 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2025-06-01 22:42:58.942569 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-01 22:42:58.945270 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-01 22:42:58.945817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-01 22:42:58.947252 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-01 22:42:58.948077 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-01 22:42:58.948642 | orchestrator | 2025-06-01 22:42:58.949562 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:42:58.950562 | orchestrator | Sunday 01 June 2025 22:42:58 +0000 (0:00:00.383) 0:00:07.108 *********** 2025-06-01 22:42:59.124938 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:42:59.125794 | orchestrator | 2025-06-01 22:42:59.126979 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:42:59.128332 | orchestrator | Sunday 01 June 2025 22:42:59 +0000 (0:00:00.196) 0:00:07.304 *********** 2025-06-01 22:42:59.320784 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:42:59.323850 | orchestrator | 2025-06-01 22:42:59.326641 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:42:59.327363 | orchestrator | Sunday 01 June 2025 22:42:59 +0000 (0:00:00.194) 0:00:07.499 *********** 2025-06-01 22:42:59.541553 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:42:59.541747 | orchestrator | 2025-06-01 22:42:59.542581 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:42:59.544056 | orchestrator | Sunday 01 June 2025 22:42:59 +0000 
(0:00:00.218) 0:00:07.718 *********** 2025-06-01 22:42:59.746400 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:42:59.747962 | orchestrator | 2025-06-01 22:42:59.749170 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:42:59.751950 | orchestrator | Sunday 01 June 2025 22:42:59 +0000 (0:00:00.206) 0:00:07.925 *********** 2025-06-01 22:42:59.966080 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:42:59.969064 | orchestrator | 2025-06-01 22:42:59.971643 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:42:59.972542 | orchestrator | Sunday 01 June 2025 22:42:59 +0000 (0:00:00.218) 0:00:08.144 *********** 2025-06-01 22:43:00.191529 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:43:00.194462 | orchestrator | 2025-06-01 22:43:00.199002 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:43:00.201505 | orchestrator | Sunday 01 June 2025 22:43:00 +0000 (0:00:00.224) 0:00:08.368 *********** 2025-06-01 22:43:00.401847 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:43:00.402802 | orchestrator | 2025-06-01 22:43:00.403905 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:43:00.404485 | orchestrator | Sunday 01 June 2025 22:43:00 +0000 (0:00:00.212) 0:00:08.580 *********** 2025-06-01 22:43:00.627626 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:43:00.628391 | orchestrator | 2025-06-01 22:43:00.629623 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:43:00.630249 | orchestrator | Sunday 01 June 2025 22:43:00 +0000 (0:00:00.225) 0:00:08.805 *********** 2025-06-01 22:43:01.903963 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-01 22:43:01.905008 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-01 
22:43:01.907407 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-01 22:43:01.907733 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-01 22:43:01.908146 | orchestrator | 2025-06-01 22:43:01.908663 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:43:01.910248 | orchestrator | Sunday 01 June 2025 22:43:01 +0000 (0:00:01.275) 0:00:10.081 *********** 2025-06-01 22:43:02.168826 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:43:02.169039 | orchestrator | 2025-06-01 22:43:02.169286 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:43:02.170552 | orchestrator | Sunday 01 June 2025 22:43:02 +0000 (0:00:00.264) 0:00:10.346 *********** 2025-06-01 22:43:02.355233 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:43:02.355412 | orchestrator | 2025-06-01 22:43:02.355812 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:43:02.356127 | orchestrator | Sunday 01 June 2025 22:43:02 +0000 (0:00:00.189) 0:00:10.535 *********** 2025-06-01 22:43:02.528702 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:43:02.531416 | orchestrator | 2025-06-01 22:43:02.532231 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:43:02.533473 | orchestrator | Sunday 01 June 2025 22:43:02 +0000 (0:00:00.172) 0:00:10.708 *********** 2025-06-01 22:43:02.698906 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:43:02.701360 | orchestrator | 2025-06-01 22:43:02.702096 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-01 22:43:02.702304 | orchestrator | Sunday 01 June 2025 22:43:02 +0000 (0:00:00.171) 0:00:10.879 *********** 2025-06-01 22:43:02.844969 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-06-01 22:43:02.845169 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-06-01 22:43:02.845187 | orchestrator | 2025-06-01 22:43:02.845439 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-01 22:43:02.846353 | orchestrator | Sunday 01 June 2025 22:43:02 +0000 (0:00:00.145) 0:00:11.024 *********** 2025-06-01 22:43:02.962421 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:43:02.962585 | orchestrator | 2025-06-01 22:43:02.964531 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-01 22:43:02.964629 | orchestrator | Sunday 01 June 2025 22:43:02 +0000 (0:00:00.117) 0:00:11.143 *********** 2025-06-01 22:43:03.097346 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:43:03.097576 | orchestrator | 2025-06-01 22:43:03.097599 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-01 22:43:03.097827 | orchestrator | Sunday 01 June 2025 22:43:03 +0000 (0:00:00.131) 0:00:11.274 *********** 2025-06-01 22:43:03.200680 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:43:03.200849 | orchestrator | 2025-06-01 22:43:03.202124 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-01 22:43:03.202218 | orchestrator | Sunday 01 June 2025 22:43:03 +0000 (0:00:00.106) 0:00:11.380 *********** 2025-06-01 22:43:03.315103 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:43:03.315173 | orchestrator | 2025-06-01 22:43:03.315256 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-01 22:43:03.315647 | orchestrator | Sunday 01 June 2025 22:43:03 +0000 (0:00:00.112) 0:00:11.493 *********** 2025-06-01 22:43:03.448759 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '836f126b-3930-552c-8c28-37312a7074e3'}}) 2025-06-01 22:43:03.448951 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '04cd8323-667e-5571-83c4-b35d38a67016'}}) 2025-06-01 22:43:03.451508 | orchestrator | 2025-06-01 22:43:03.451534 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-01 22:43:03.451618 | orchestrator | Sunday 01 June 2025 22:43:03 +0000 (0:00:00.136) 0:00:11.629 *********** 2025-06-01 22:43:03.569810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '836f126b-3930-552c-8c28-37312a7074e3'}})  2025-06-01 22:43:03.570407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '04cd8323-667e-5571-83c4-b35d38a67016'}})  2025-06-01 22:43:03.570492 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:43:03.570954 | orchestrator | 2025-06-01 22:43:03.573095 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-01 22:43:03.573315 | orchestrator | Sunday 01 June 2025 22:43:03 +0000 (0:00:00.121) 0:00:11.750 *********** 2025-06-01 22:43:03.844407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '836f126b-3930-552c-8c28-37312a7074e3'}})  2025-06-01 22:43:03.844562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '04cd8323-667e-5571-83c4-b35d38a67016'}})  2025-06-01 22:43:03.844578 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:43:03.844660 | orchestrator | 2025-06-01 22:43:03.844677 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-01 22:43:03.846385 | orchestrator | Sunday 01 June 2025 22:43:03 +0000 (0:00:00.273) 0:00:12.024 *********** 2025-06-01 22:43:03.971976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '836f126b-3930-552c-8c28-37312a7074e3'}})  2025-06-01 22:43:03.977213 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '04cd8323-667e-5571-83c4-b35d38a67016'}})  2025-06-01 22:43:03.977462 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:43:03.977600 | orchestrator | 2025-06-01 22:43:03.978091 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-01 22:43:03.978354 | orchestrator | Sunday 01 June 2025 22:43:03 +0000 (0:00:00.126) 0:00:12.150 *********** 2025-06-01 22:43:04.091322 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:43:04.091538 | orchestrator | 2025-06-01 22:43:04.091630 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-01 22:43:04.091647 | orchestrator | Sunday 01 June 2025 22:43:04 +0000 (0:00:00.121) 0:00:12.272 *********** 2025-06-01 22:43:04.203570 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:43:04.203694 | orchestrator | 2025-06-01 22:43:04.203788 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-01 22:43:04.205208 | orchestrator | Sunday 01 June 2025 22:43:04 +0000 (0:00:00.111) 0:00:12.383 *********** 2025-06-01 22:43:04.294526 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:43:04.294647 | orchestrator | 2025-06-01 22:43:04.294757 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-01 22:43:04.294856 | orchestrator | Sunday 01 June 2025 22:43:04 +0000 (0:00:00.091) 0:00:12.475 *********** 2025-06-01 22:43:04.393488 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:43:04.393627 | orchestrator | 2025-06-01 22:43:04.393737 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-01 22:43:04.394126 | orchestrator | Sunday 01 June 2025 22:43:04 +0000 (0:00:00.098) 0:00:12.573 *********** 2025-06-01 22:43:04.491194 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:43:04.491308 | orchestrator | 2025-06-01 
22:43:04.492165 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-01 22:43:04.492338 | orchestrator | Sunday 01 June 2025 22:43:04 +0000 (0:00:00.095) 0:00:12.669 *********** 2025-06-01 22:43:04.593385 | orchestrator | ok: [testbed-node-3] => { 2025-06-01 22:43:04.593516 | orchestrator |  "ceph_osd_devices": { 2025-06-01 22:43:04.594446 | orchestrator |  "sdb": { 2025-06-01 22:43:04.594538 | orchestrator |  "osd_lvm_uuid": "836f126b-3930-552c-8c28-37312a7074e3" 2025-06-01 22:43:04.594857 | orchestrator |  }, 2025-06-01 22:43:04.596214 | orchestrator |  "sdc": { 2025-06-01 22:43:04.598222 | orchestrator |  "osd_lvm_uuid": "04cd8323-667e-5571-83c4-b35d38a67016" 2025-06-01 22:43:04.598260 | orchestrator |  } 2025-06-01 22:43:04.598273 | orchestrator |  } 2025-06-01 22:43:04.598285 | orchestrator | } 2025-06-01 22:43:04.598298 | orchestrator | 2025-06-01 22:43:04.598312 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-01 22:43:04.598925 | orchestrator | Sunday 01 June 2025 22:43:04 +0000 (0:00:00.104) 0:00:12.773 *********** 2025-06-01 22:43:04.693556 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:43:04.693840 | orchestrator | 2025-06-01 22:43:04.694310 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-01 22:43:04.696159 | orchestrator | Sunday 01 June 2025 22:43:04 +0000 (0:00:00.099) 0:00:12.872 *********** 2025-06-01 22:43:04.803230 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:43:04.804690 | orchestrator | 2025-06-01 22:43:04.805703 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-01 22:43:04.806522 | orchestrator | Sunday 01 June 2025 22:43:04 +0000 (0:00:00.109) 0:00:12.982 *********** 2025-06-01 22:43:04.906156 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:43:04.906371 | orchestrator | 2025-06-01 
22:43:04.907538 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-01 22:43:04.907826 | orchestrator | Sunday 01 June 2025 22:43:04 +0000 (0:00:00.102) 0:00:13.084 *********** 2025-06-01 22:43:05.059063 | orchestrator | changed: [testbed-node-3] => { 2025-06-01 22:43:05.059469 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-01 22:43:05.060985 | orchestrator |  "ceph_osd_devices": { 2025-06-01 22:43:05.065206 | orchestrator |  "sdb": { 2025-06-01 22:43:05.066092 | orchestrator |  "osd_lvm_uuid": "836f126b-3930-552c-8c28-37312a7074e3" 2025-06-01 22:43:05.067016 | orchestrator |  }, 2025-06-01 22:43:05.068619 | orchestrator |  "sdc": { 2025-06-01 22:43:05.069106 | orchestrator |  "osd_lvm_uuid": "04cd8323-667e-5571-83c4-b35d38a67016" 2025-06-01 22:43:05.069608 | orchestrator |  } 2025-06-01 22:43:05.070450 | orchestrator |  }, 2025-06-01 22:43:05.071049 | orchestrator |  "lvm_volumes": [ 2025-06-01 22:43:05.071674 | orchestrator |  { 2025-06-01 22:43:05.072567 | orchestrator |  "data": "osd-block-836f126b-3930-552c-8c28-37312a7074e3", 2025-06-01 22:43:05.073556 | orchestrator |  "data_vg": "ceph-836f126b-3930-552c-8c28-37312a7074e3" 2025-06-01 22:43:05.074531 | orchestrator |  }, 2025-06-01 22:43:05.074617 | orchestrator |  { 2025-06-01 22:43:05.075122 | orchestrator |  "data": "osd-block-04cd8323-667e-5571-83c4-b35d38a67016", 2025-06-01 22:43:05.075537 | orchestrator |  "data_vg": "ceph-04cd8323-667e-5571-83c4-b35d38a67016" 2025-06-01 22:43:05.075971 | orchestrator |  } 2025-06-01 22:43:05.076600 | orchestrator |  ] 2025-06-01 22:43:05.077105 | orchestrator |  } 2025-06-01 22:43:05.077742 | orchestrator | } 2025-06-01 22:43:05.078184 | orchestrator | 2025-06-01 22:43:05.078752 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-01 22:43:05.079390 | orchestrator | Sunday 01 June 2025 22:43:05 +0000 (0:00:00.153) 0:00:13.238 *********** 2025-06-01 
22:43:06.956445 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-01 22:43:06.957662 | orchestrator | 2025-06-01 22:43:06.958548 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-01 22:43:06.961089 | orchestrator | 2025-06-01 22:43:06.962003 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-01 22:43:06.964126 | orchestrator | Sunday 01 June 2025 22:43:06 +0000 (0:00:01.894) 0:00:15.133 *********** 2025-06-01 22:43:07.198010 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-01 22:43:07.198578 | orchestrator | 2025-06-01 22:43:07.199367 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-01 22:43:07.199857 | orchestrator | Sunday 01 June 2025 22:43:07 +0000 (0:00:00.242) 0:00:15.375 *********** 2025-06-01 22:43:07.437500 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:43:07.438119 | orchestrator | 2025-06-01 22:43:07.438348 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:07.440505 | orchestrator | Sunday 01 June 2025 22:43:07 +0000 (0:00:00.239) 0:00:15.615 *********** 2025-06-01 22:43:07.808675 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-01 22:43:07.809309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-01 22:43:07.810114 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-01 22:43:07.815420 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-01 22:43:07.818117 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-01 22:43:07.819928 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-01 22:43:07.821201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-01 22:43:07.822172 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-01 22:43:07.823253 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-01 22:43:07.824952 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-01 22:43:07.828633 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-01 22:43:07.830319 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-01 22:43:07.833026 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-01 22:43:07.833094 | orchestrator | 2025-06-01 22:43:07.834943 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:07.834967 | orchestrator | Sunday 01 June 2025 22:43:07 +0000 (0:00:00.372) 0:00:15.987 *********** 2025-06-01 22:43:08.016030 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:08.018107 | orchestrator | 2025-06-01 22:43:08.020489 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:08.021819 | orchestrator | Sunday 01 June 2025 22:43:08 +0000 (0:00:00.204) 0:00:16.192 *********** 2025-06-01 22:43:08.226720 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:08.227421 | orchestrator | 2025-06-01 22:43:08.228377 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:08.229651 | orchestrator | Sunday 01 June 2025 22:43:08 +0000 (0:00:00.212) 0:00:16.405 *********** 2025-06-01 22:43:08.431425 | orchestrator | skipping: 
[testbed-node-4] 2025-06-01 22:43:08.431676 | orchestrator | 2025-06-01 22:43:08.432989 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:08.435415 | orchestrator | Sunday 01 June 2025 22:43:08 +0000 (0:00:00.204) 0:00:16.609 *********** 2025-06-01 22:43:08.622863 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:08.623852 | orchestrator | 2025-06-01 22:43:08.630154 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:08.630182 | orchestrator | Sunday 01 June 2025 22:43:08 +0000 (0:00:00.190) 0:00:16.800 *********** 2025-06-01 22:43:09.274330 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:09.274528 | orchestrator | 2025-06-01 22:43:09.274547 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:09.276374 | orchestrator | Sunday 01 June 2025 22:43:09 +0000 (0:00:00.648) 0:00:17.449 *********** 2025-06-01 22:43:09.467624 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:09.468833 | orchestrator | 2025-06-01 22:43:09.470718 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:09.473509 | orchestrator | Sunday 01 June 2025 22:43:09 +0000 (0:00:00.195) 0:00:17.645 *********** 2025-06-01 22:43:09.674444 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:09.675965 | orchestrator | 2025-06-01 22:43:09.682363 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:09.682462 | orchestrator | Sunday 01 June 2025 22:43:09 +0000 (0:00:00.207) 0:00:17.852 *********** 2025-06-01 22:43:09.875419 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:09.875608 | orchestrator | 2025-06-01 22:43:09.878337 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:09.880136 | 
orchestrator | Sunday 01 June 2025 22:43:09 +0000 (0:00:00.201) 0:00:18.054 *********** 2025-06-01 22:43:10.288745 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66) 2025-06-01 22:43:10.292479 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66) 2025-06-01 22:43:10.292521 | orchestrator | 2025-06-01 22:43:10.292895 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:10.294334 | orchestrator | Sunday 01 June 2025 22:43:10 +0000 (0:00:00.411) 0:00:18.465 *********** 2025-06-01 22:43:10.715480 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_540779ba-6163-469a-a896-cda4c9a0c816) 2025-06-01 22:43:10.716797 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_540779ba-6163-469a-a896-cda4c9a0c816) 2025-06-01 22:43:10.719108 | orchestrator | 2025-06-01 22:43:10.719142 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:10.720112 | orchestrator | Sunday 01 June 2025 22:43:10 +0000 (0:00:00.426) 0:00:18.892 *********** 2025-06-01 22:43:11.117924 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a15d8421-e56a-4621-aed8-2eaa8f026081) 2025-06-01 22:43:11.120580 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a15d8421-e56a-4621-aed8-2eaa8f026081) 2025-06-01 22:43:11.122969 | orchestrator | 2025-06-01 22:43:11.124158 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:11.125108 | orchestrator | Sunday 01 June 2025 22:43:11 +0000 (0:00:00.402) 0:00:19.294 *********** 2025-06-01 22:43:11.550853 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f76f8f5b-fbcd-4a13-87b7-7d8b29fb80c4) 2025-06-01 22:43:11.551018 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_f76f8f5b-fbcd-4a13-87b7-7d8b29fb80c4) 2025-06-01 22:43:11.553551 | orchestrator | 2025-06-01 22:43:11.553574 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:11.554343 | orchestrator | Sunday 01 June 2025 22:43:11 +0000 (0:00:00.431) 0:00:19.726 *********** 2025-06-01 22:43:11.873480 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-01 22:43:11.875556 | orchestrator | 2025-06-01 22:43:11.875591 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:43:11.875652 | orchestrator | Sunday 01 June 2025 22:43:11 +0000 (0:00:00.322) 0:00:20.049 *********** 2025-06-01 22:43:12.254782 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-01 22:43:12.255722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-01 22:43:12.256975 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-01 22:43:12.259384 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-01 22:43:12.260021 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-01 22:43:12.260338 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-01 22:43:12.260631 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-01 22:43:12.261139 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-01 22:43:12.264354 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-01 22:43:12.264675 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-01 22:43:12.265130 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-01 22:43:12.265522 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-01 22:43:12.265987 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-01 22:43:12.266371 | orchestrator | 2025-06-01 22:43:12.266649 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:43:12.268325 | orchestrator | Sunday 01 June 2025 22:43:12 +0000 (0:00:00.381) 0:00:20.431 *********** 2025-06-01 22:43:12.463994 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:12.464106 | orchestrator | 2025-06-01 22:43:12.464327 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:43:12.464699 | orchestrator | Sunday 01 June 2025 22:43:12 +0000 (0:00:00.211) 0:00:20.642 *********** 2025-06-01 22:43:13.157826 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:13.158922 | orchestrator | 2025-06-01 22:43:13.160156 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:43:13.161233 | orchestrator | Sunday 01 June 2025 22:43:13 +0000 (0:00:00.692) 0:00:21.335 *********** 2025-06-01 22:43:13.371635 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:13.372051 | orchestrator | 2025-06-01 22:43:13.373238 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:43:13.374283 | orchestrator | Sunday 01 June 2025 22:43:13 +0000 (0:00:00.215) 0:00:21.550 *********** 2025-06-01 22:43:13.589419 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:13.590774 | orchestrator | 2025-06-01 22:43:13.591912 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-06-01 22:43:13.596413 | orchestrator | Sunday 01 June 2025 22:43:13 +0000 (0:00:00.217) 0:00:21.767 *********** 2025-06-01 22:43:13.799585 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:13.803933 | orchestrator | 2025-06-01 22:43:13.803975 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:43:13.805555 | orchestrator | Sunday 01 June 2025 22:43:13 +0000 (0:00:00.210) 0:00:21.978 *********** 2025-06-01 22:43:14.015147 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:14.015603 | orchestrator | 2025-06-01 22:43:14.021023 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:43:14.021266 | orchestrator | Sunday 01 June 2025 22:43:14 +0000 (0:00:00.214) 0:00:22.192 *********** 2025-06-01 22:43:14.223783 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:14.224336 | orchestrator | 2025-06-01 22:43:14.228642 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:43:14.229325 | orchestrator | Sunday 01 June 2025 22:43:14 +0000 (0:00:00.208) 0:00:22.400 *********** 2025-06-01 22:43:14.414204 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:14.415066 | orchestrator | 2025-06-01 22:43:14.419535 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:43:14.423193 | orchestrator | Sunday 01 June 2025 22:43:14 +0000 (0:00:00.189) 0:00:22.590 *********** 2025-06-01 22:43:15.063005 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-01 22:43:15.064438 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-01 22:43:15.065676 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-01 22:43:15.067398 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-01 22:43:15.068579 | orchestrator | 2025-06-01 
22:43:15.071464 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:43:15.075616 | orchestrator | Sunday 01 June 2025 22:43:15 +0000 (0:00:00.649) 0:00:23.240 *********** 2025-06-01 22:43:15.271148 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:15.271431 | orchestrator | 2025-06-01 22:43:15.276222 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:43:15.278288 | orchestrator | Sunday 01 June 2025 22:43:15 +0000 (0:00:00.207) 0:00:23.448 *********** 2025-06-01 22:43:15.478304 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:15.479023 | orchestrator | 2025-06-01 22:43:15.479800 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:43:15.483979 | orchestrator | Sunday 01 June 2025 22:43:15 +0000 (0:00:00.206) 0:00:23.655 *********** 2025-06-01 22:43:15.679545 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:15.680093 | orchestrator | 2025-06-01 22:43:15.681410 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:43:15.684139 | orchestrator | Sunday 01 June 2025 22:43:15 +0000 (0:00:00.202) 0:00:23.857 *********** 2025-06-01 22:43:15.900649 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:15.902124 | orchestrator | 2025-06-01 22:43:15.903175 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-01 22:43:15.903991 | orchestrator | Sunday 01 June 2025 22:43:15 +0000 (0:00:00.222) 0:00:24.079 *********** 2025-06-01 22:43:16.272306 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-06-01 22:43:16.272901 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-06-01 22:43:16.276571 | orchestrator | 2025-06-01 22:43:16.278086 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2025-06-01 22:43:16.278460 | orchestrator | Sunday 01 June 2025 22:43:16 +0000 (0:00:00.368) 0:00:24.448 *********** 2025-06-01 22:43:16.417367 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:16.418537 | orchestrator | 2025-06-01 22:43:16.420532 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-01 22:43:16.421332 | orchestrator | Sunday 01 June 2025 22:43:16 +0000 (0:00:00.146) 0:00:24.595 *********** 2025-06-01 22:43:16.554931 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:16.555029 | orchestrator | 2025-06-01 22:43:16.555591 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-01 22:43:16.556648 | orchestrator | Sunday 01 June 2025 22:43:16 +0000 (0:00:00.137) 0:00:24.732 *********** 2025-06-01 22:43:16.702364 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:16.703295 | orchestrator | 2025-06-01 22:43:16.704106 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-01 22:43:16.704920 | orchestrator | Sunday 01 June 2025 22:43:16 +0000 (0:00:00.148) 0:00:24.880 *********** 2025-06-01 22:43:16.847505 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:43:16.849220 | orchestrator | 2025-06-01 22:43:16.849268 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-01 22:43:16.849347 | orchestrator | Sunday 01 June 2025 22:43:16 +0000 (0:00:00.142) 0:00:25.023 *********** 2025-06-01 22:43:17.034790 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '656e26cc-5762-5518-9587-501a37b6e3ae'}}) 2025-06-01 22:43:17.035408 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'}}) 2025-06-01 22:43:17.044239 | orchestrator | 2025-06-01 22:43:17.044294 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2025-06-01 22:43:17.044309 | orchestrator | Sunday 01 June 2025 22:43:17 +0000 (0:00:00.187) 0:00:25.211 *********** 2025-06-01 22:43:17.180106 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '656e26cc-5762-5518-9587-501a37b6e3ae'}})  2025-06-01 22:43:17.181347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'}})  2025-06-01 22:43:17.182802 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:17.187572 | orchestrator | 2025-06-01 22:43:17.188775 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-01 22:43:17.189658 | orchestrator | Sunday 01 June 2025 22:43:17 +0000 (0:00:00.143) 0:00:25.355 *********** 2025-06-01 22:43:17.323258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '656e26cc-5762-5518-9587-501a37b6e3ae'}})  2025-06-01 22:43:17.323966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'}})  2025-06-01 22:43:17.325659 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:17.327152 | orchestrator | 2025-06-01 22:43:17.328177 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-01 22:43:17.329184 | orchestrator | Sunday 01 June 2025 22:43:17 +0000 (0:00:00.146) 0:00:25.501 *********** 2025-06-01 22:43:17.489208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '656e26cc-5762-5518-9587-501a37b6e3ae'}})  2025-06-01 22:43:17.490387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'}})  2025-06-01 22:43:17.492215 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:17.492820 | 
orchestrator | 2025-06-01 22:43:17.494377 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-01 22:43:17.495299 | orchestrator | Sunday 01 June 2025 22:43:17 +0000 (0:00:00.165) 0:00:25.667 *********** 2025-06-01 22:43:17.629999 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:43:17.630157 | orchestrator | 2025-06-01 22:43:17.631750 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-01 22:43:17.635136 | orchestrator | Sunday 01 June 2025 22:43:17 +0000 (0:00:00.140) 0:00:25.808 *********** 2025-06-01 22:43:17.770467 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:43:17.772458 | orchestrator | 2025-06-01 22:43:17.774090 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-01 22:43:17.775710 | orchestrator | Sunday 01 June 2025 22:43:17 +0000 (0:00:00.140) 0:00:25.949 *********** 2025-06-01 22:43:17.918674 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:17.918763 | orchestrator | 2025-06-01 22:43:17.921544 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-01 22:43:17.921826 | orchestrator | Sunday 01 June 2025 22:43:17 +0000 (0:00:00.147) 0:00:26.096 *********** 2025-06-01 22:43:18.258997 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:18.259813 | orchestrator | 2025-06-01 22:43:18.259944 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-01 22:43:18.260499 | orchestrator | Sunday 01 June 2025 22:43:18 +0000 (0:00:00.334) 0:00:26.430 *********** 2025-06-01 22:43:18.394473 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:18.394560 | orchestrator | 2025-06-01 22:43:18.395316 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-01 22:43:18.397640 | orchestrator | Sunday 01 June 2025 22:43:18 +0000 
(0:00:00.139) 0:00:26.570 *********** 2025-06-01 22:43:18.540766 | orchestrator | ok: [testbed-node-4] => { 2025-06-01 22:43:18.810435 | orchestrator |  "ceph_osd_devices": { 2025-06-01 22:43:18.810509 | orchestrator |  "sdb": { 2025-06-01 22:43:18.810527 | orchestrator |  "osd_lvm_uuid": "656e26cc-5762-5518-9587-501a37b6e3ae" 2025-06-01 22:43:18.810540 | orchestrator |  }, 2025-06-01 22:43:18.810552 | orchestrator |  "sdc": { 2025-06-01 22:43:18.810563 | orchestrator |  "osd_lvm_uuid": "154be1eb-c9a2-50db-b9e4-8c9f064a0b1c" 2025-06-01 22:43:18.810575 | orchestrator |  } 2025-06-01 22:43:18.810586 | orchestrator |  } 2025-06-01 22:43:18.810597 | orchestrator | } 2025-06-01 22:43:18.810609 | orchestrator | 2025-06-01 22:43:18.810620 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-01 22:43:18.810632 | orchestrator | Sunday 01 June 2025 22:43:18 +0000 (0:00:00.147) 0:00:26.717 *********** 2025-06-01 22:43:18.810643 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:18.810654 | orchestrator | 2025-06-01 22:43:18.810666 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-01 22:43:18.810677 | orchestrator | Sunday 01 June 2025 22:43:18 +0000 (0:00:00.141) 0:00:26.859 *********** 2025-06-01 22:43:18.820581 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:18.822411 | orchestrator | 2025-06-01 22:43:18.824666 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-01 22:43:18.825550 | orchestrator | Sunday 01 June 2025 22:43:18 +0000 (0:00:00.140) 0:00:26.999 *********** 2025-06-01 22:43:19.000621 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:43:19.001417 | orchestrator | 2025-06-01 22:43:19.010436 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-01 22:43:19.010474 | orchestrator | Sunday 01 June 2025 22:43:18 +0000 
(0:00:00.176) 0:00:27.176 *********** 2025-06-01 22:43:19.217834 | orchestrator | changed: [testbed-node-4] => { 2025-06-01 22:43:19.218326 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-01 22:43:19.221566 | orchestrator |  "ceph_osd_devices": { 2025-06-01 22:43:19.223064 | orchestrator |  "sdb": { 2025-06-01 22:43:19.224387 | orchestrator |  "osd_lvm_uuid": "656e26cc-5762-5518-9587-501a37b6e3ae" 2025-06-01 22:43:19.224620 | orchestrator |  }, 2025-06-01 22:43:19.225425 | orchestrator |  "sdc": { 2025-06-01 22:43:19.228719 | orchestrator |  "osd_lvm_uuid": "154be1eb-c9a2-50db-b9e4-8c9f064a0b1c" 2025-06-01 22:43:19.230745 | orchestrator |  } 2025-06-01 22:43:19.231056 | orchestrator |  }, 2025-06-01 22:43:19.232772 | orchestrator |  "lvm_volumes": [ 2025-06-01 22:43:19.234058 | orchestrator |  { 2025-06-01 22:43:19.235640 | orchestrator |  "data": "osd-block-656e26cc-5762-5518-9587-501a37b6e3ae", 2025-06-01 22:43:19.236761 | orchestrator |  "data_vg": "ceph-656e26cc-5762-5518-9587-501a37b6e3ae" 2025-06-01 22:43:19.238132 | orchestrator |  }, 2025-06-01 22:43:19.239105 | orchestrator |  { 2025-06-01 22:43:19.239710 | orchestrator |  "data": "osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c", 2025-06-01 22:43:19.240692 | orchestrator |  "data_vg": "ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c" 2025-06-01 22:43:19.241438 | orchestrator |  } 2025-06-01 22:43:19.242553 | orchestrator |  ] 2025-06-01 22:43:19.243368 | orchestrator |  } 2025-06-01 22:43:19.244579 | orchestrator | } 2025-06-01 22:43:19.245392 | orchestrator | 2025-06-01 22:43:19.246193 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-01 22:43:19.246666 | orchestrator | Sunday 01 June 2025 22:43:19 +0000 (0:00:00.215) 0:00:27.391 *********** 2025-06-01 22:43:20.357770 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-01 22:43:20.357941 | orchestrator | 2025-06-01 22:43:20.358067 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2025-06-01 22:43:20.359633 | orchestrator | 2025-06-01 22:43:20.359654 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-01 22:43:20.359666 | orchestrator | Sunday 01 June 2025 22:43:20 +0000 (0:00:01.141) 0:00:28.533 *********** 2025-06-01 22:43:20.855947 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-01 22:43:20.856908 | orchestrator | 2025-06-01 22:43:20.858348 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-01 22:43:20.859598 | orchestrator | Sunday 01 June 2025 22:43:20 +0000 (0:00:00.500) 0:00:29.033 *********** 2025-06-01 22:43:21.591518 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:43:21.592275 | orchestrator | 2025-06-01 22:43:21.593076 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:21.594229 | orchestrator | Sunday 01 June 2025 22:43:21 +0000 (0:00:00.734) 0:00:29.768 *********** 2025-06-01 22:43:22.060562 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-01 22:43:22.062372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-01 22:43:22.064301 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-01 22:43:22.065098 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-01 22:43:22.066222 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-01 22:43:22.071741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-06-01 22:43:22.072676 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-01 22:43:22.073377 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-01 22:43:22.074256 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-01 22:43:22.075505 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-01 22:43:22.077485 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-01 22:43:22.079113 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-01 22:43:22.080154 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-01 22:43:22.080894 | orchestrator | 2025-06-01 22:43:22.082097 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:22.083285 | orchestrator | Sunday 01 June 2025 22:43:22 +0000 (0:00:00.469) 0:00:30.238 *********** 2025-06-01 22:43:22.291749 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:43:22.292853 | orchestrator | 2025-06-01 22:43:22.294681 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:22.296352 | orchestrator | Sunday 01 June 2025 22:43:22 +0000 (0:00:00.231) 0:00:30.469 *********** 2025-06-01 22:43:22.492925 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:43:22.493048 | orchestrator | 2025-06-01 22:43:22.497590 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:22.497629 | orchestrator | Sunday 01 June 2025 22:43:22 +0000 (0:00:00.200) 0:00:30.670 *********** 2025-06-01 22:43:22.727094 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:43:22.727204 | orchestrator | 2025-06-01 22:43:22.727583 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:22.727961 | 
orchestrator | Sunday 01 June 2025 22:43:22 +0000 (0:00:00.235) 0:00:30.905 *********** 2025-06-01 22:43:22.948952 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:43:22.949059 | orchestrator | 2025-06-01 22:43:22.949071 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:22.949083 | orchestrator | Sunday 01 June 2025 22:43:22 +0000 (0:00:00.213) 0:00:31.118 *********** 2025-06-01 22:43:23.144889 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:43:23.145749 | orchestrator | 2025-06-01 22:43:23.146938 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:23.149176 | orchestrator | Sunday 01 June 2025 22:43:23 +0000 (0:00:00.204) 0:00:31.323 *********** 2025-06-01 22:43:23.387103 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:43:23.387286 | orchestrator | 2025-06-01 22:43:23.390457 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:23.390500 | orchestrator | Sunday 01 June 2025 22:43:23 +0000 (0:00:00.239) 0:00:31.563 *********** 2025-06-01 22:43:23.567685 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:43:23.567916 | orchestrator | 2025-06-01 22:43:23.569612 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:23.569673 | orchestrator | Sunday 01 June 2025 22:43:23 +0000 (0:00:00.182) 0:00:31.746 *********** 2025-06-01 22:43:23.782940 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:43:23.783101 | orchestrator | 2025-06-01 22:43:23.783986 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:43:23.784424 | orchestrator | Sunday 01 June 2025 22:43:23 +0000 (0:00:00.215) 0:00:31.961 *********** 2025-06-01 22:43:24.453194 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b)
2025-06-01 22:43:24.454134 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b)
2025-06-01 22:43:24.455766 | orchestrator |
2025-06-01 22:43:24.456837 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:43:24.457428 | orchestrator | Sunday 01 June 2025 22:43:24 +0000 (0:00:00.667) 0:00:32.629 ***********
2025-06-01 22:43:25.318247 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8eb07f49-902f-451e-9ead-836ebd4b9d37)
2025-06-01 22:43:25.318518 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8eb07f49-902f-451e-9ead-836ebd4b9d37)
2025-06-01 22:43:25.319488 | orchestrator |
2025-06-01 22:43:25.321672 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:43:25.322414 | orchestrator | Sunday 01 June 2025 22:43:25 +0000 (0:00:00.865) 0:00:33.495 ***********
2025-06-01 22:43:25.799798 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_44465191-0fa1-4c22-9234-5804ca50669c)
2025-06-01 22:43:25.799934 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_44465191-0fa1-4c22-9234-5804ca50669c)
2025-06-01 22:43:25.800983 | orchestrator |
2025-06-01 22:43:25.802314 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:43:25.803812 | orchestrator | Sunday 01 June 2025 22:43:25 +0000 (0:00:00.481) 0:00:33.976 ***********
2025-06-01 22:43:26.224286 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a931087f-71b6-44f2-a559-c8deb4b3c146)
2025-06-01 22:43:26.225168 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a931087f-71b6-44f2-a559-c8deb4b3c146)
2025-06-01 22:43:26.225982 | orchestrator |
2025-06-01 22:43:26.227296 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:43:26.228549 | orchestrator | Sunday 01 June 2025 22:43:26 +0000 (0:00:00.426) 0:00:34.402 ***********
2025-06-01 22:43:26.566668 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-01 22:43:26.567154 | orchestrator |
2025-06-01 22:43:26.568092 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:43:26.570123 | orchestrator | Sunday 01 June 2025 22:43:26 +0000 (0:00:00.341) 0:00:34.744 ***********
2025-06-01 22:43:26.956607 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-06-01 22:43:26.957118 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-06-01 22:43:26.958113 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-06-01 22:43:26.959304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-06-01 22:43:26.960912 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-06-01 22:43:26.960934 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-06-01 22:43:26.961560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-06-01 22:43:26.961900 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-06-01 22:43:26.962494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-06-01 22:43:26.963249 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-06-01 22:43:26.963998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-06-01 22:43:26.964806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-06-01 22:43:26.965176 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-06-01 22:43:26.965577 | orchestrator |
2025-06-01 22:43:26.966268 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:43:26.966335 | orchestrator | Sunday 01 June 2025 22:43:26 +0000 (0:00:00.390) 0:00:35.135 ***********
2025-06-01 22:43:27.155221 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:27.155403 | orchestrator |
2025-06-01 22:43:27.155540 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:43:27.155948 | orchestrator | Sunday 01 June 2025 22:43:27 +0000 (0:00:00.198) 0:00:35.333 ***********
2025-06-01 22:43:27.366421 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:27.367365 | orchestrator |
2025-06-01 22:43:27.367576 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:43:27.371027 | orchestrator | Sunday 01 June 2025 22:43:27 +0000 (0:00:00.208) 0:00:35.542 ***********
2025-06-01 22:43:27.570712 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:27.571604 | orchestrator |
2025-06-01 22:43:27.574182 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:43:27.575968 | orchestrator | Sunday 01 June 2025 22:43:27 +0000 (0:00:00.206) 0:00:35.748 ***********
2025-06-01 22:43:27.780930 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:27.781795 | orchestrator |
2025-06-01 22:43:27.783035 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:43:27.783793 | orchestrator | Sunday 01 June 2025 22:43:27 +0000 (0:00:00.209) 0:00:35.958 ***********
2025-06-01 22:43:27.983051 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:27.983498 | orchestrator |
2025-06-01 22:43:27.984130 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:43:27.986448 | orchestrator | Sunday 01 June 2025 22:43:27 +0000 (0:00:00.201) 0:00:36.159 ***********
2025-06-01 22:43:28.666310 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:28.666541 | orchestrator |
2025-06-01 22:43:28.667680 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:43:28.669129 | orchestrator | Sunday 01 June 2025 22:43:28 +0000 (0:00:00.683) 0:00:36.843 ***********
2025-06-01 22:43:28.888336 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:28.889155 | orchestrator |
2025-06-01 22:43:28.890277 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:43:28.891169 | orchestrator | Sunday 01 June 2025 22:43:28 +0000 (0:00:00.221) 0:00:37.064 ***********
2025-06-01 22:43:29.093566 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:29.093708 | orchestrator |
2025-06-01 22:43:29.093775 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:43:29.095050 | orchestrator | Sunday 01 June 2025 22:43:29 +0000 (0:00:00.205) 0:00:37.270 ***********
2025-06-01 22:43:29.750525 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-06-01 22:43:29.751045 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-06-01 22:43:29.752066 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-06-01 22:43:29.752842 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-06-01 22:43:29.753900 | orchestrator |
2025-06-01 22:43:29.755287 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:43:29.756074 | orchestrator | Sunday 01 June 2025 22:43:29 +0000 (0:00:00.657) 0:00:37.927 ***********
2025-06-01 22:43:29.951926 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:29.952482 | orchestrator |
2025-06-01 22:43:29.953664 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:43:29.954697 | orchestrator | Sunday 01 June 2025 22:43:29 +0000 (0:00:00.202) 0:00:38.129 ***********
2025-06-01 22:43:30.156066 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:30.156442 | orchestrator |
2025-06-01 22:43:30.157306 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:43:30.159103 | orchestrator | Sunday 01 June 2025 22:43:30 +0000 (0:00:00.204) 0:00:38.334 ***********
2025-06-01 22:43:30.360943 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:30.362669 | orchestrator |
2025-06-01 22:43:30.363815 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:43:30.364650 | orchestrator | Sunday 01 June 2025 22:43:30 +0000 (0:00:00.205) 0:00:38.539 ***********
2025-06-01 22:43:30.600549 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:30.601323 | orchestrator |
2025-06-01 22:43:30.602166 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-06-01 22:43:30.603732 | orchestrator | Sunday 01 June 2025 22:43:30 +0000 (0:00:00.238) 0:00:38.777 ***********
2025-06-01 22:43:30.776906 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-06-01 22:43:30.778411 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-06-01 22:43:30.780677 | orchestrator |
2025-06-01 22:43:30.780759 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-06-01 22:43:30.781708 | orchestrator | Sunday 01 June 2025 22:43:30 +0000 (0:00:00.177) 0:00:38.955 ***********
2025-06-01 22:43:30.916698 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:30.918265 | orchestrator |
2025-06-01 22:43:30.920353 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-06-01 22:43:30.921026 | orchestrator | Sunday 01 June 2025 22:43:30 +0000 (0:00:00.139) 0:00:39.095 ***********
2025-06-01 22:43:31.054825 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:31.056168 | orchestrator |
2025-06-01 22:43:31.056966 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-06-01 22:43:31.058772 | orchestrator | Sunday 01 June 2025 22:43:31 +0000 (0:00:00.138) 0:00:39.233 ***********
2025-06-01 22:43:31.193083 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:31.194232 | orchestrator |
2025-06-01 22:43:31.195442 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-06-01 22:43:31.196791 | orchestrator | Sunday 01 June 2025 22:43:31 +0000 (0:00:00.137) 0:00:39.371 ***********
2025-06-01 22:43:31.584726 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:43:31.586700 | orchestrator |
2025-06-01 22:43:31.587542 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-06-01 22:43:31.588462 | orchestrator | Sunday 01 June 2025 22:43:31 +0000 (0:00:00.389) 0:00:39.760 ***********
2025-06-01 22:43:31.769898 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '83360607-213f-5c54-ae9b-aa580894d048'}})
2025-06-01 22:43:31.770175 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c033fef4-2688-55e0-9ca7-53dbc156bc4e'}})
2025-06-01 22:43:31.770905 | orchestrator |
2025-06-01 22:43:31.771415 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-06-01 22:43:31.772193 | orchestrator | Sunday 01 June 2025 22:43:31 +0000 (0:00:00.188) 0:00:39.949 ***********
2025-06-01 22:43:31.939924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '83360607-213f-5c54-ae9b-aa580894d048'}})
2025-06-01 22:43:31.942607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c033fef4-2688-55e0-9ca7-53dbc156bc4e'}})
2025-06-01 22:43:31.944047 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:31.945023 | orchestrator |
2025-06-01 22:43:31.946751 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-06-01 22:43:31.947582 | orchestrator | Sunday 01 June 2025 22:43:31 +0000 (0:00:00.169) 0:00:40.119 ***********
2025-06-01 22:43:32.113232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '83360607-213f-5c54-ae9b-aa580894d048'}})
2025-06-01 22:43:32.114286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c033fef4-2688-55e0-9ca7-53dbc156bc4e'}})
2025-06-01 22:43:32.115136 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:32.116317 | orchestrator |
2025-06-01 22:43:32.118897 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-06-01 22:43:32.119747 | orchestrator | Sunday 01 June 2025 22:43:32 +0000 (0:00:00.172) 0:00:40.291 ***********
2025-06-01 22:43:32.270118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '83360607-213f-5c54-ae9b-aa580894d048'}})
2025-06-01 22:43:32.270323 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c033fef4-2688-55e0-9ca7-53dbc156bc4e'}})
2025-06-01 22:43:32.272738 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:32.274061 | orchestrator |
2025-06-01 22:43:32.275269 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-06-01 22:43:32.276089 | orchestrator | Sunday 01 June 2025 22:43:32 +0000 (0:00:00.156) 0:00:40.447 ***********
2025-06-01 22:43:32.431084 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:43:32.431512 | orchestrator |
2025-06-01 22:43:32.433105 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-06-01 22:43:32.433681 | orchestrator | Sunday 01 June 2025 22:43:32 +0000 (0:00:00.161) 0:00:40.609 ***********
2025-06-01 22:43:32.586168 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:43:32.586666 | orchestrator |
2025-06-01 22:43:32.587727 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-06-01 22:43:32.589122 | orchestrator | Sunday 01 June 2025 22:43:32 +0000 (0:00:00.152) 0:00:40.762 ***********
2025-06-01 22:43:32.734560 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:32.735458 | orchestrator |
2025-06-01 22:43:32.736431 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-06-01 22:43:32.736933 | orchestrator | Sunday 01 June 2025 22:43:32 +0000 (0:00:00.151) 0:00:40.914 ***********
2025-06-01 22:43:32.873701 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:32.873812 | orchestrator |
2025-06-01 22:43:32.874727 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-06-01 22:43:32.878201 | orchestrator | Sunday 01 June 2025 22:43:32 +0000 (0:00:00.138) 0:00:41.052 ***********
2025-06-01 22:43:33.014949 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:33.015980 | orchestrator |
2025-06-01 22:43:33.017627 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-01 22:43:33.017736 | orchestrator | Sunday 01 June 2025 22:43:33 +0000 (0:00:00.139) 0:00:41.191 ***********
2025-06-01 22:43:33.183126 | orchestrator | ok: [testbed-node-5] => {
2025-06-01 22:43:33.183321 | orchestrator |     "ceph_osd_devices": {
2025-06-01 22:43:33.183709 | orchestrator |         "sdb": {
2025-06-01 22:43:33.184420 | orchestrator |             "osd_lvm_uuid": "83360607-213f-5c54-ae9b-aa580894d048"
2025-06-01 22:43:33.185831 | orchestrator |         },
2025-06-01 22:43:33.186746 | orchestrator |         "sdc": {
2025-06-01 22:43:33.186770 | orchestrator |             "osd_lvm_uuid": "c033fef4-2688-55e0-9ca7-53dbc156bc4e"
2025-06-01 22:43:33.187411 | orchestrator |         }
2025-06-01 22:43:33.188096 | orchestrator |     }
2025-06-01 22:43:33.188379 | orchestrator | }
2025-06-01 22:43:33.188946 | orchestrator |
2025-06-01 22:43:33.189377 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-01 22:43:33.189901 | orchestrator | Sunday 01 June 2025 22:43:33 +0000 (0:00:00.169) 0:00:41.361 ***********
2025-06-01 22:43:33.309510 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:33.310693 | orchestrator |
2025-06-01 22:43:33.311581 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-01 22:43:33.313242 | orchestrator | Sunday 01 June 2025 22:43:33 +0000 (0:00:00.126) 0:00:41.488 ***********
2025-06-01 22:43:33.752583 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:33.753645 | orchestrator |
2025-06-01 22:43:33.754235 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-01 22:43:33.755157 | orchestrator | Sunday 01 June 2025 22:43:33 +0000 (0:00:00.442) 0:00:41.930 ***********
2025-06-01 22:43:33.903762 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:43:33.905104 | orchestrator |
2025-06-01 22:43:33.906446 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-01 22:43:33.907782 | orchestrator | Sunday 01 June 2025 22:43:33 +0000 (0:00:00.151) 0:00:42.082 ***********
2025-06-01 22:43:34.144767 | orchestrator | changed: [testbed-node-5] => {
2025-06-01 22:43:34.146208 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-06-01 22:43:34.149104 | orchestrator |         "ceph_osd_devices": {
2025-06-01 22:43:34.150129 | orchestrator |             "sdb": {
2025-06-01 22:43:34.151106 | orchestrator |                 "osd_lvm_uuid": "83360607-213f-5c54-ae9b-aa580894d048"
2025-06-01 22:43:34.152100 | orchestrator |             },
2025-06-01 22:43:34.152933 | orchestrator |             "sdc": {
2025-06-01 22:43:34.153939 | orchestrator |                 "osd_lvm_uuid": "c033fef4-2688-55e0-9ca7-53dbc156bc4e"
2025-06-01 22:43:34.154402 | orchestrator |             }
2025-06-01 22:43:34.155297 | orchestrator |         },
2025-06-01 22:43:34.155766 | orchestrator |         "lvm_volumes": [
2025-06-01 22:43:34.157036 | orchestrator |             {
2025-06-01 22:43:34.157405 | orchestrator |                 "data": "osd-block-83360607-213f-5c54-ae9b-aa580894d048",
2025-06-01 22:43:34.158287 | orchestrator |                 "data_vg": "ceph-83360607-213f-5c54-ae9b-aa580894d048"
2025-06-01 22:43:34.158759 | orchestrator |             },
2025-06-01 22:43:34.159640 | orchestrator |             {
2025-06-01 22:43:34.160231 | orchestrator |                 "data": "osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e",
2025-06-01 22:43:34.161051 | orchestrator |                 "data_vg": "ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e"
2025-06-01 22:43:34.161467 | orchestrator |             }
2025-06-01 22:43:34.162139 | orchestrator |         ]
2025-06-01 22:43:34.162353 | orchestrator |     }
2025-06-01 22:43:34.162788 | orchestrator | }
2025-06-01 22:43:34.163254 | orchestrator |
2025-06-01 22:43:34.163491 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-01 22:43:34.163918 | orchestrator | Sunday 01 June 2025 22:43:34 +0000 (0:00:00.240) 0:00:42.323 ***********
2025-06-01 22:43:35.145612 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-01 22:43:35.147446 | orchestrator |
2025-06-01 22:43:35.148908 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:43:35.149335 | orchestrator | 2025-06-01 22:43:35 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 22:43:35.150107 | orchestrator | 2025-06-01 22:43:35 | INFO  | Please wait and do not abort execution.
2025-06-01 22:43:35.151910 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-01 22:43:35.153324 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-01 22:43:35.154297 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-01 22:43:35.156221 | orchestrator |
2025-06-01 22:43:35.157090 | orchestrator |
2025-06-01 22:43:35.157967 | orchestrator |
2025-06-01 22:43:35.159121 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:43:35.160078 | orchestrator | Sunday 01 June 2025 22:43:35 +0000 (0:00:00.998) 0:00:43.321 ***********
2025-06-01 22:43:35.161150 | orchestrator | ===============================================================================
2025-06-01 22:43:35.162092 | orchestrator | Write configuration file ------------------------------------------------ 4.03s
2025-06-01 22:43:35.163303 | orchestrator | Add known links to the list of available block devices ------------------ 1.33s
2025-06-01 22:43:35.164207 | orchestrator | Get initial list of available block devices ----------------------------- 1.32s
2025-06-01 22:43:35.165252 | orchestrator | Add known partitions to the list of available block devices ------------- 1.28s
2025-06-01 22:43:35.166013 | orchestrator | Add known partitions to the list of available block devices ------------- 1.16s
2025-06-01 22:43:35.166633 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.04s
2025-06-01 22:43:35.167504 | orchestrator | Add known links to the list of available block devices ------------------ 0.87s
2025-06-01 22:43:35.168113 | orchestrator | Add known links to the list of available block devices ------------------ 0.79s
2025-06-01 22:43:35.169803 | orchestrator | Print DB devices -------------------------------------------------------- 0.69s
2025-06-01 22:43:35.170003 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2025-06-01 22:43:35.171301 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.69s
2025-06-01 22:43:35.171573 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2025-06-01 22:43:35.172267 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2025-06-01 22:43:35.173240 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2025-06-01 22:43:35.174918 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2025-06-01 22:43:35.175218 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2025-06-01 22:43:35.176172 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2025-06-01 22:43:35.176698 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.64s
2025-06-01 22:43:35.177773 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2025-06-01 22:43:35.178544 | orchestrator | Print configuration data ------------------------------------------------ 0.61s
2025-06-01 22:43:47.664397 | orchestrator | Registering Redlock._acquired_script
2025-06-01 22:43:47.664541 | orchestrator | Registering Redlock._extend_script
2025-06-01 22:43:47.664557 | orchestrator | Registering Redlock._release_script
2025-06-01 22:43:47.720335 | orchestrator | 2025-06-01 22:43:47 | INFO  | Task 434cab63-3a6e-4d56-bd43-819b91fc4416 (sync inventory) is running in background. Output coming soon.
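The "Generate lvm_volumes structure (block only)" and "Print configuration data" tasks above show a simple mapping: each OSD device's `osd_lvm_uuid` becomes a `data` LV named `osd-block-<uuid>` inside a `data_vg` named `ceph-<uuid>`. A minimal sketch of that mapping (not the actual OSISM Ansible implementation, just the naming convention visible in the log):

```python
def build_lvm_volumes(ceph_osd_devices):
    """Derive the lvm_volumes list from a ceph_osd_devices mapping,
    following the osd-block-<uuid> / ceph-<uuid> naming seen in the log."""
    volumes = []
    for device, config in sorted(ceph_osd_devices.items()):
        uuid = config["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",   # logical volume name
            "data_vg": f"ceph-{uuid}",     # volume group name
        })
    return volumes

# Values taken from the "Print ceph_osd_devices" output above.
devices = {
    "sdb": {"osd_lvm_uuid": "83360607-213f-5c54-ae9b-aa580894d048"},
    "sdc": {"osd_lvm_uuid": "c033fef4-2688-55e0-9ca7-53dbc156bc4e"},
}
print(build_lvm_volumes(devices))
```

Run against the two devices from the log, this reproduces the `lvm_volumes` list printed by the "Print configuration data" task.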
2025-06-01 22:44:34.103671 | orchestrator | 2025-06-01 22:44:16 | INFO  | Starting group_vars file reorganization
2025-06-01 22:44:34.103843 | orchestrator | 2025-06-01 22:44:16 | INFO  | Moved 0 file(s) to their respective directories
2025-06-01 22:44:34.103862 | orchestrator | 2025-06-01 22:44:16 | INFO  | Group_vars file reorganization completed
2025-06-01 22:44:34.103874 | orchestrator | 2025-06-01 22:44:18 | INFO  | Starting variable preparation from inventory
2025-06-01 22:44:34.103886 | orchestrator | 2025-06-01 22:44:19 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-06-01 22:44:34.103898 | orchestrator | 2025-06-01 22:44:19 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-06-01 22:44:34.103936 | orchestrator | 2025-06-01 22:44:19 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-06-01 22:44:34.103948 | orchestrator | 2025-06-01 22:44:19 | INFO  | 3 file(s) written, 6 host(s) processed
2025-06-01 22:44:34.103959 | orchestrator | 2025-06-01 22:44:19 | INFO  | Variable preparation completed:
2025-06-01 22:44:34.103970 | orchestrator | 2025-06-01 22:44:20 | INFO  | Starting inventory overwrite handling
2025-06-01 22:44:34.103981 | orchestrator | 2025-06-01 22:44:20 | INFO  | Handling group overwrites in 99-overwrite
2025-06-01 22:44:34.103993 | orchestrator | 2025-06-01 22:44:20 | INFO  | Removing group frr:children from 60-generic
2025-06-01 22:44:34.104003 | orchestrator | 2025-06-01 22:44:20 | INFO  | Removing group storage:children from 50-kolla
2025-06-01 22:44:34.104014 | orchestrator | 2025-06-01 22:44:20 | INFO  | Removing group netbird:children from 50-infrastruture
2025-06-01 22:44:34.104036 | orchestrator | 2025-06-01 22:44:20 | INFO  | Removing group ceph-mds from 50-ceph
2025-06-01 22:44:34.104048 | orchestrator | 2025-06-01 22:44:20 | INFO  | Removing group ceph-rgw from 50-ceph
2025-06-01 22:44:34.104059 | orchestrator | 2025-06-01 22:44:20 | INFO  | Handling group overwrites in 20-roles
2025-06-01 22:44:34.104070 | orchestrator | 2025-06-01 22:44:20 | INFO  | Removing group k3s_node from 50-infrastruture
2025-06-01 22:44:34.104081 | orchestrator | 2025-06-01 22:44:20 | INFO  | Removed 6 group(s) in total
2025-06-01 22:44:34.104092 | orchestrator | 2025-06-01 22:44:20 | INFO  | Inventory overwrite handling completed
2025-06-01 22:44:34.104103 | orchestrator | 2025-06-01 22:44:21 | INFO  | Starting merge of inventory files
2025-06-01 22:44:34.104114 | orchestrator | 2025-06-01 22:44:21 | INFO  | Inventory files merged successfully
2025-06-01 22:44:34.104125 | orchestrator | 2025-06-01 22:44:25 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-06-01 22:44:34.104136 | orchestrator | 2025-06-01 22:44:32 | INFO  | Successfully wrote ClusterShell configuration
2025-06-01 22:44:36.149974 | orchestrator | 2025-06-01 22:44:36 | INFO  | Task 5e3ae86b-9089-4c2f-9980-e67a9f0d975a (ceph-create-lvm-devices) was prepared for execution.
2025-06-01 22:44:36.150167 | orchestrator | 2025-06-01 22:44:36 | INFO  | It takes a moment until task 5e3ae86b-9089-4c2f-9980-e67a9f0d975a (ceph-create-lvm-devices) has been started and output is visible here.
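The "Handling group overwrites" messages above describe layered inventories: when an overwrite layer such as 99-overwrite defines a group, that group is removed from the lower-priority layers so the overwrite wins. A hedged sketch of that idea, assuming a plain dict-of-dicts inventory model (not the actual OSISM inventory code); layer and group names mirror the log:

```python
def apply_overwrites(layers, overwrite_layer):
    """Remove every group defined in overwrite_layer from all other
    layers, returning how many groups were removed in total."""
    removed = 0
    for group in layers[overwrite_layer]:
        for name, groups in layers.items():
            if name != overwrite_layer and group in groups:
                del groups[group]
                removed += 1
    return removed

# Illustrative layers; the ceph-mds/ceph-rgw removals from 50-ceph
# correspond to two of the log messages above.
layers = {
    "50-ceph": {"ceph-mds": ["testbed-node-0"], "ceph-rgw": ["testbed-node-0"]},
    "99-overwrite": {"ceph-mds": [], "ceph-rgw": []},
}
removed = apply_overwrites(layers, "99-overwrite")
print(f"Removed {removed} group(s) in total")
```

With these two layers, both `ceph-mds` and `ceph-rgw` are dropped from `50-ceph`, matching the "Removing group … from 50-ceph" lines in the log.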
2025-06-01 22:44:40.457591 | orchestrator |
2025-06-01 22:44:40.457822 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-01 22:44:40.461037 | orchestrator |
2025-06-01 22:44:40.461581 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-01 22:44:40.462291 | orchestrator | Sunday 01 June 2025 22:44:40 +0000 (0:00:00.318) 0:00:00.318 ***********
2025-06-01 22:44:40.690528 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-01 22:44:40.691476 | orchestrator |
2025-06-01 22:44:40.692668 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-01 22:44:40.693997 | orchestrator | Sunday 01 June 2025 22:44:40 +0000 (0:00:00.236) 0:00:00.555 ***********
2025-06-01 22:44:40.916402 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:44:40.916477 | orchestrator |
2025-06-01 22:44:40.916493 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:44:40.916917 | orchestrator | Sunday 01 June 2025 22:44:40 +0000 (0:00:00.225) 0:00:00.781 ***********
2025-06-01 22:44:41.318438 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-06-01 22:44:41.321013 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-06-01 22:44:41.321669 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-06-01 22:44:41.324428 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-06-01 22:44:41.325034 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-06-01 22:44:41.326250 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-06-01 22:44:41.327003 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-06-01 22:44:41.327218 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-06-01 22:44:41.327638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-06-01 22:44:41.327944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-06-01 22:44:41.328273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-06-01 22:44:41.328684 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-06-01 22:44:41.329155 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-06-01 22:44:41.329425 | orchestrator |
2025-06-01 22:44:41.329918 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:44:41.330173 | orchestrator | Sunday 01 June 2025 22:44:41 +0000 (0:00:00.401) 0:00:01.182 ***********
2025-06-01 22:44:41.785856 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:44:41.786190 | orchestrator |
2025-06-01 22:44:41.786891 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:44:41.787774 | orchestrator | Sunday 01 June 2025 22:44:41 +0000 (0:00:00.465) 0:00:01.648 ***********
2025-06-01 22:44:41.982715 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:44:41.982743 | orchestrator |
2025-06-01 22:44:41.983446 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:44:41.985221 | orchestrator | Sunday 01 June 2025 22:44:41 +0000 (0:00:00.198) 0:00:01.846 ***********
2025-06-01 22:44:42.180350 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:44:42.181104 | orchestrator |
2025-06-01 22:44:42.182083 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:44:42.184010 | orchestrator | Sunday 01 June 2025 22:44:42 +0000 (0:00:00.199) 0:00:02.045 ***********
2025-06-01 22:44:42.373257 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:44:42.373440 | orchestrator |
2025-06-01 22:44:42.374622 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:44:42.375379 | orchestrator | Sunday 01 June 2025 22:44:42 +0000 (0:00:00.192) 0:00:02.238 ***********
2025-06-01 22:44:42.588618 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:44:42.590462 | orchestrator |
2025-06-01 22:44:42.590936 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:44:42.592363 | orchestrator | Sunday 01 June 2025 22:44:42 +0000 (0:00:00.213) 0:00:02.452 ***********
2025-06-01 22:44:42.789763 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:44:42.790122 | orchestrator |
2025-06-01 22:44:42.791200 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:44:42.792187 | orchestrator | Sunday 01 June 2025 22:44:42 +0000 (0:00:00.203) 0:00:02.655 ***********
2025-06-01 22:44:42.999095 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:44:43.000356 | orchestrator |
2025-06-01 22:44:43.002960 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:44:43.002986 | orchestrator | Sunday 01 June 2025 22:44:42 +0000 (0:00:00.208) 0:00:02.864 ***********
2025-06-01 22:44:43.189108 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:44:43.189913 | orchestrator |
2025-06-01 22:44:43.191629 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:44:43.192672 | orchestrator | Sunday 01 June 2025 22:44:43 +0000 (0:00:00.189) 0:00:03.053 ***********
2025-06-01 22:44:43.623534 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65)
2025-06-01 22:44:43.624409 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65)
2025-06-01 22:44:43.625488 | orchestrator |
2025-06-01 22:44:43.626470 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:44:43.626542 | orchestrator | Sunday 01 June 2025 22:44:43 +0000 (0:00:00.434) 0:00:03.488 ***********
2025-06-01 22:44:44.028212 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e3d9d8cc-8358-4e9f-a548-9ae6b89fa066)
2025-06-01 22:44:44.031678 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e3d9d8cc-8358-4e9f-a548-9ae6b89fa066)
2025-06-01 22:44:44.033181 | orchestrator |
2025-06-01 22:44:44.033263 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:44:44.033919 | orchestrator | Sunday 01 June 2025 22:44:44 +0000 (0:00:00.402) 0:00:03.890 ***********
2025-06-01 22:44:44.663505 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bed9961c-b7ee-4957-bf35-2fee53571a5a)
2025-06-01 22:44:44.666376 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bed9961c-b7ee-4957-bf35-2fee53571a5a)
2025-06-01 22:44:44.666434 | orchestrator |
2025-06-01 22:44:44.666459 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:44:44.666517 | orchestrator | Sunday 01 June 2025 22:44:44 +0000 (0:00:00.636) 0:00:04.526 ***********
2025-06-01 22:44:45.308962 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6d04dff8-74fe-4097-ace0-4c437e5e0f9f)
2025-06-01 22:44:45.309723 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6d04dff8-74fe-4097-ace0-4c437e5e0f9f)
2025-06-01 22:44:45.310563 | orchestrator |
2025-06-01 22:44:45.312294 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:44:45.312315 | orchestrator | Sunday 01 June 2025 22:44:45 +0000 (0:00:00.646) 0:00:05.173 ***********
2025-06-01 22:44:46.043553 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-01 22:44:46.043972 | orchestrator |
2025-06-01 22:44:46.044164 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:44:46.044704 | orchestrator | Sunday 01 June 2025 22:44:46 +0000 (0:00:00.734) 0:00:05.908 ***********
2025-06-01 22:44:46.441109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-06-01 22:44:46.442504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-06-01 22:44:46.443195 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-06-01 22:44:46.444426 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-06-01 22:44:46.446218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-06-01 22:44:46.447132 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-06-01 22:44:46.447903 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-06-01 22:44:46.447930 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-06-01 22:44:46.448619 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-06-01 22:44:46.449300 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-06-01 22:44:46.450298 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-06-01 22:44:46.451184 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-06-01 22:44:46.452444 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-06-01 22:44:46.453361 | orchestrator |
2025-06-01 22:44:46.454441 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:44:46.455819 | orchestrator | Sunday 01 June 2025 22:44:46 +0000 (0:00:00.398) 0:00:06.306 ***********
2025-06-01 22:44:46.649962 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:44:46.651038 | orchestrator |
2025-06-01 22:44:46.651435 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:44:46.653486 | orchestrator | Sunday 01 June 2025 22:44:46 +0000 (0:00:00.207) 0:00:06.514 ***********
2025-06-01 22:44:46.845074 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:44:46.846118 | orchestrator |
2025-06-01 22:44:46.847562 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:44:46.849067 | orchestrator | Sunday 01 June 2025 22:44:46 +0000 (0:00:00.196) 0:00:06.710 ***********
2025-06-01 22:44:47.051622 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:44:47.051745 | orchestrator |
2025-06-01 22:44:47.053152 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:44:47.053775 | orchestrator | Sunday 01 June 2025 22:44:47 +0000 (0:00:00.205) 0:00:06.915 ***********
2025-06-01 22:44:47.250975 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:44:47.251942 | orchestrator |
2025-06-01 22:44:47.253683 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:44:47.254548 | orchestrator | Sunday 01 June 2025
22:44:47 +0000 (0:00:00.198) 0:00:07.114 *********** 2025-06-01 22:44:47.440293 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:47.440981 | orchestrator | 2025-06-01 22:44:47.441938 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:44:47.442628 | orchestrator | Sunday 01 June 2025 22:44:47 +0000 (0:00:00.191) 0:00:07.305 *********** 2025-06-01 22:44:47.637846 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:47.638217 | orchestrator | 2025-06-01 22:44:47.639160 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:44:47.640376 | orchestrator | Sunday 01 June 2025 22:44:47 +0000 (0:00:00.196) 0:00:07.502 *********** 2025-06-01 22:44:47.832680 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:47.834115 | orchestrator | 2025-06-01 22:44:47.834334 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:44:47.835533 | orchestrator | Sunday 01 June 2025 22:44:47 +0000 (0:00:00.195) 0:00:07.698 *********** 2025-06-01 22:44:48.043166 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:48.045812 | orchestrator | 2025-06-01 22:44:48.046148 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:44:48.048071 | orchestrator | Sunday 01 June 2025 22:44:48 +0000 (0:00:00.210) 0:00:07.908 *********** 2025-06-01 22:44:49.144556 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-01 22:44:49.144661 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-01 22:44:49.144676 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-01 22:44:49.145043 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-01 22:44:49.145112 | orchestrator | 2025-06-01 22:44:49.146140 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:44:49.146435 
| orchestrator | Sunday 01 June 2025 22:44:49 +0000 (0:00:01.095) 0:00:09.004 *********** 2025-06-01 22:44:49.398893 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:49.399012 | orchestrator | 2025-06-01 22:44:49.399037 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:44:49.399057 | orchestrator | Sunday 01 June 2025 22:44:49 +0000 (0:00:00.256) 0:00:09.261 *********** 2025-06-01 22:44:49.585842 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:49.586011 | orchestrator | 2025-06-01 22:44:49.586671 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:44:49.587291 | orchestrator | Sunday 01 June 2025 22:44:49 +0000 (0:00:00.189) 0:00:09.450 *********** 2025-06-01 22:44:49.811132 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:49.812908 | orchestrator | 2025-06-01 22:44:49.813647 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:44:49.814694 | orchestrator | Sunday 01 June 2025 22:44:49 +0000 (0:00:00.223) 0:00:09.674 *********** 2025-06-01 22:44:49.995839 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:49.996629 | orchestrator | 2025-06-01 22:44:49.998379 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-01 22:44:49.998968 | orchestrator | Sunday 01 June 2025 22:44:49 +0000 (0:00:00.184) 0:00:09.859 *********** 2025-06-01 22:44:50.148910 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:50.149390 | orchestrator | 2025-06-01 22:44:50.150708 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-01 22:44:50.151593 | orchestrator | Sunday 01 June 2025 22:44:50 +0000 (0:00:00.154) 0:00:10.014 *********** 2025-06-01 22:44:50.353724 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'836f126b-3930-552c-8c28-37312a7074e3'}}) 2025-06-01 22:44:50.353970 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '04cd8323-667e-5571-83c4-b35d38a67016'}}) 2025-06-01 22:44:50.356217 | orchestrator | 2025-06-01 22:44:50.357093 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-01 22:44:50.357987 | orchestrator | Sunday 01 June 2025 22:44:50 +0000 (0:00:00.204) 0:00:10.218 *********** 2025-06-01 22:44:52.361825 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'}) 2025-06-01 22:44:52.361962 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'}) 2025-06-01 22:44:52.363035 | orchestrator | 2025-06-01 22:44:52.366192 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-01 22:44:52.366598 | orchestrator | Sunday 01 June 2025 22:44:52 +0000 (0:00:02.006) 0:00:12.225 *********** 2025-06-01 22:44:52.514289 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'})  2025-06-01 22:44:52.514715 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'})  2025-06-01 22:44:52.516367 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:52.517230 | orchestrator | 2025-06-01 22:44:52.518735 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-01 22:44:52.519711 | orchestrator | Sunday 01 June 2025 22:44:52 +0000 (0:00:00.153) 0:00:12.378 *********** 2025-06-01 22:44:53.948759 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'}) 2025-06-01 22:44:53.948997 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'}) 2025-06-01 22:44:53.949469 | orchestrator | 2025-06-01 22:44:53.950451 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-01 22:44:53.950663 | orchestrator | Sunday 01 June 2025 22:44:53 +0000 (0:00:01.434) 0:00:13.812 *********** 2025-06-01 22:44:54.091317 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'})  2025-06-01 22:44:54.092093 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'})  2025-06-01 22:44:54.093702 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:54.095011 | orchestrator | 2025-06-01 22:44:54.095232 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-01 22:44:54.096219 | orchestrator | Sunday 01 June 2025 22:44:54 +0000 (0:00:00.143) 0:00:13.956 *********** 2025-06-01 22:44:54.242922 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:54.244309 | orchestrator | 2025-06-01 22:44:54.244620 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-01 22:44:54.246860 | orchestrator | Sunday 01 June 2025 22:44:54 +0000 (0:00:00.151) 0:00:14.108 *********** 2025-06-01 22:44:54.619007 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'})  2025-06-01 22:44:54.620066 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'})  2025-06-01 22:44:54.621390 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:54.622122 | orchestrator | 2025-06-01 22:44:54.623023 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-01 22:44:54.623853 | orchestrator | Sunday 01 June 2025 22:44:54 +0000 (0:00:00.374) 0:00:14.482 *********** 2025-06-01 22:44:54.798122 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:54.798248 | orchestrator | 2025-06-01 22:44:54.798767 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-01 22:44:54.799854 | orchestrator | Sunday 01 June 2025 22:44:54 +0000 (0:00:00.179) 0:00:14.661 *********** 2025-06-01 22:44:54.952489 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'})  2025-06-01 22:44:54.953131 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'})  2025-06-01 22:44:54.955199 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:54.956892 | orchestrator | 2025-06-01 22:44:54.957830 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-01 22:44:54.958394 | orchestrator | Sunday 01 June 2025 22:44:54 +0000 (0:00:00.155) 0:00:14.817 *********** 2025-06-01 22:44:55.091426 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:55.092664 | orchestrator | 2025-06-01 22:44:55.095028 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-01 22:44:55.096427 | orchestrator | Sunday 01 June 2025 22:44:55 +0000 (0:00:00.138) 0:00:14.956 *********** 2025-06-01 22:44:55.252886 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'})  2025-06-01 22:44:55.254819 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'})  2025-06-01 22:44:55.256068 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:55.256896 | orchestrator | 2025-06-01 22:44:55.257733 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-01 22:44:55.258682 | orchestrator | Sunday 01 June 2025 22:44:55 +0000 (0:00:00.161) 0:00:15.118 *********** 2025-06-01 22:44:55.399297 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:44:55.401305 | orchestrator | 2025-06-01 22:44:55.403449 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-01 22:44:55.404446 | orchestrator | Sunday 01 June 2025 22:44:55 +0000 (0:00:00.144) 0:00:15.263 *********** 2025-06-01 22:44:55.568112 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'})  2025-06-01 22:44:55.570107 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'})  2025-06-01 22:44:55.570983 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:55.572211 | orchestrator | 2025-06-01 22:44:55.572498 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-01 22:44:55.573018 | orchestrator | Sunday 01 June 2025 22:44:55 +0000 (0:00:00.169) 0:00:15.432 *********** 2025-06-01 22:44:55.726962 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'})  
2025-06-01 22:44:55.728074 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'})  2025-06-01 22:44:55.728448 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:55.730229 | orchestrator | 2025-06-01 22:44:55.730454 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-01 22:44:55.731707 | orchestrator | Sunday 01 June 2025 22:44:55 +0000 (0:00:00.159) 0:00:15.591 *********** 2025-06-01 22:44:55.884073 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'})  2025-06-01 22:44:55.885017 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'})  2025-06-01 22:44:55.885959 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:55.886960 | orchestrator | 2025-06-01 22:44:55.888770 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-01 22:44:55.888813 | orchestrator | Sunday 01 June 2025 22:44:55 +0000 (0:00:00.156) 0:00:15.748 *********** 2025-06-01 22:44:56.027463 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:56.027652 | orchestrator | 2025-06-01 22:44:56.028246 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-01 22:44:56.028996 | orchestrator | Sunday 01 June 2025 22:44:56 +0000 (0:00:00.142) 0:00:15.891 *********** 2025-06-01 22:44:56.158821 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:56.159932 | orchestrator | 2025-06-01 22:44:56.161359 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-01 22:44:56.162196 | orchestrator | Sunday 01 June 2025 22:44:56 +0000 (0:00:00.131) 
0:00:16.023 *********** 2025-06-01 22:44:56.287768 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:56.288382 | orchestrator | 2025-06-01 22:44:56.288877 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-01 22:44:56.289775 | orchestrator | Sunday 01 June 2025 22:44:56 +0000 (0:00:00.129) 0:00:16.152 *********** 2025-06-01 22:44:56.634765 | orchestrator | ok: [testbed-node-3] => { 2025-06-01 22:44:56.638625 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-01 22:44:56.639303 | orchestrator | } 2025-06-01 22:44:56.640160 | orchestrator | 2025-06-01 22:44:56.640994 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-01 22:44:56.641933 | orchestrator | Sunday 01 June 2025 22:44:56 +0000 (0:00:00.345) 0:00:16.497 *********** 2025-06-01 22:44:56.768966 | orchestrator | ok: [testbed-node-3] => { 2025-06-01 22:44:56.769160 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-01 22:44:56.770652 | orchestrator | } 2025-06-01 22:44:56.770895 | orchestrator | 2025-06-01 22:44:56.771616 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-01 22:44:56.772228 | orchestrator | Sunday 01 June 2025 22:44:56 +0000 (0:00:00.136) 0:00:16.634 *********** 2025-06-01 22:44:56.915150 | orchestrator | ok: [testbed-node-3] => { 2025-06-01 22:44:56.915356 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-01 22:44:56.916468 | orchestrator | } 2025-06-01 22:44:56.916521 | orchestrator | 2025-06-01 22:44:56.917295 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-01 22:44:56.919279 | orchestrator | Sunday 01 June 2025 22:44:56 +0000 (0:00:00.146) 0:00:16.780 *********** 2025-06-01 22:44:57.574132 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:44:57.574743 | orchestrator | 2025-06-01 22:44:57.576640 | orchestrator | TASK [Gather WAL VGs 
with total and available size in bytes] ******************* 2025-06-01 22:44:57.577741 | orchestrator | Sunday 01 June 2025 22:44:57 +0000 (0:00:00.656) 0:00:17.436 *********** 2025-06-01 22:44:58.094174 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:44:58.094941 | orchestrator | 2025-06-01 22:44:58.095864 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-01 22:44:58.096962 | orchestrator | Sunday 01 June 2025 22:44:58 +0000 (0:00:00.522) 0:00:17.959 *********** 2025-06-01 22:44:58.625722 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:44:58.628492 | orchestrator | 2025-06-01 22:44:58.629135 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-01 22:44:58.631072 | orchestrator | Sunday 01 June 2025 22:44:58 +0000 (0:00:00.528) 0:00:18.488 *********** 2025-06-01 22:44:58.778356 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:44:58.779251 | orchestrator | 2025-06-01 22:44:58.781241 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-01 22:44:58.781681 | orchestrator | Sunday 01 June 2025 22:44:58 +0000 (0:00:00.153) 0:00:18.641 *********** 2025-06-01 22:44:58.907985 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:58.908359 | orchestrator | 2025-06-01 22:44:58.909407 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-01 22:44:58.911437 | orchestrator | Sunday 01 June 2025 22:44:58 +0000 (0:00:00.131) 0:00:18.772 *********** 2025-06-01 22:44:59.055942 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:59.057153 | orchestrator | 2025-06-01 22:44:59.057444 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-01 22:44:59.058272 | orchestrator | Sunday 01 June 2025 22:44:59 +0000 (0:00:00.146) 0:00:18.919 *********** 2025-06-01 22:44:59.214986 | orchestrator | ok: 
[testbed-node-3] => { 2025-06-01 22:44:59.215942 | orchestrator |  "vgs_report": { 2025-06-01 22:44:59.217673 | orchestrator |  "vg": [] 2025-06-01 22:44:59.219137 | orchestrator |  } 2025-06-01 22:44:59.219507 | orchestrator | } 2025-06-01 22:44:59.220706 | orchestrator | 2025-06-01 22:44:59.221543 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-01 22:44:59.221914 | orchestrator | Sunday 01 June 2025 22:44:59 +0000 (0:00:00.158) 0:00:19.078 *********** 2025-06-01 22:44:59.357091 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:59.357684 | orchestrator | 2025-06-01 22:44:59.358529 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-01 22:44:59.358997 | orchestrator | Sunday 01 June 2025 22:44:59 +0000 (0:00:00.142) 0:00:19.220 *********** 2025-06-01 22:44:59.491371 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:59.491549 | orchestrator | 2025-06-01 22:44:59.492340 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-01 22:44:59.495202 | orchestrator | Sunday 01 June 2025 22:44:59 +0000 (0:00:00.134) 0:00:19.355 *********** 2025-06-01 22:44:59.843691 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:59.844213 | orchestrator | 2025-06-01 22:44:59.845584 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-01 22:44:59.847913 | orchestrator | Sunday 01 June 2025 22:44:59 +0000 (0:00:00.353) 0:00:19.708 *********** 2025-06-01 22:44:59.983888 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:44:59.984284 | orchestrator | 2025-06-01 22:44:59.985398 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-01 22:44:59.986201 | orchestrator | Sunday 01 June 2025 22:44:59 +0000 (0:00:00.139) 0:00:19.848 *********** 2025-06-01 22:45:00.139692 | orchestrator | skipping: 
[testbed-node-3] 2025-06-01 22:45:00.140310 | orchestrator | 2025-06-01 22:45:00.141738 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-01 22:45:00.143385 | orchestrator | Sunday 01 June 2025 22:45:00 +0000 (0:00:00.155) 0:00:20.003 *********** 2025-06-01 22:45:00.284219 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:45:00.284299 | orchestrator | 2025-06-01 22:45:00.286779 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-01 22:45:00.287664 | orchestrator | Sunday 01 June 2025 22:45:00 +0000 (0:00:00.144) 0:00:20.148 *********** 2025-06-01 22:45:00.442600 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:45:00.442994 | orchestrator | 2025-06-01 22:45:00.444006 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-01 22:45:00.446114 | orchestrator | Sunday 01 June 2025 22:45:00 +0000 (0:00:00.157) 0:00:20.306 *********** 2025-06-01 22:45:00.582298 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:45:00.584048 | orchestrator | 2025-06-01 22:45:00.585127 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-01 22:45:00.586078 | orchestrator | Sunday 01 June 2025 22:45:00 +0000 (0:00:00.138) 0:00:20.445 *********** 2025-06-01 22:45:00.711422 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:45:00.711927 | orchestrator | 2025-06-01 22:45:00.713471 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-01 22:45:00.714188 | orchestrator | Sunday 01 June 2025 22:45:00 +0000 (0:00:00.131) 0:00:20.576 *********** 2025-06-01 22:45:00.846154 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:45:00.846334 | orchestrator | 2025-06-01 22:45:00.848353 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-01 22:45:00.849705 | 
orchestrator | Sunday 01 June 2025 22:45:00 +0000 (0:00:00.134) 0:00:20.711 *********** 2025-06-01 22:45:00.976865 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:45:00.977760 | orchestrator | 2025-06-01 22:45:00.979264 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-01 22:45:00.980377 | orchestrator | Sunday 01 June 2025 22:45:00 +0000 (0:00:00.130) 0:00:20.841 *********** 2025-06-01 22:45:01.101943 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:45:01.102828 | orchestrator | 2025-06-01 22:45:01.104436 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-01 22:45:01.105738 | orchestrator | Sunday 01 June 2025 22:45:01 +0000 (0:00:00.125) 0:00:20.966 *********** 2025-06-01 22:45:01.246947 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:45:01.247145 | orchestrator | 2025-06-01 22:45:01.248146 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-01 22:45:01.249317 | orchestrator | Sunday 01 June 2025 22:45:01 +0000 (0:00:00.145) 0:00:21.112 *********** 2025-06-01 22:45:01.394385 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:45:01.395124 | orchestrator | 2025-06-01 22:45:01.396135 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-01 22:45:01.397384 | orchestrator | Sunday 01 June 2025 22:45:01 +0000 (0:00:00.146) 0:00:21.259 *********** 2025-06-01 22:45:01.550358 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'})  2025-06-01 22:45:01.550555 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'})  2025-06-01 22:45:01.552967 | orchestrator | skipping: [testbed-node-3] 2025-06-01 
22:45:01.552990 | orchestrator | 2025-06-01 22:45:01.553469 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-01 22:45:01.554682 | orchestrator | Sunday 01 June 2025 22:45:01 +0000 (0:00:00.154) 0:00:21.414 *********** 2025-06-01 22:45:01.918079 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'})  2025-06-01 22:45:01.918184 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'})  2025-06-01 22:45:01.919847 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:45:01.921435 | orchestrator | 2025-06-01 22:45:01.922407 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-01 22:45:01.923102 | orchestrator | Sunday 01 June 2025 22:45:01 +0000 (0:00:00.367) 0:00:21.781 *********** 2025-06-01 22:45:02.103025 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'})  2025-06-01 22:45:02.103232 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'})  2025-06-01 22:45:02.103777 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:45:02.105158 | orchestrator | 2025-06-01 22:45:02.106762 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-01 22:45:02.106808 | orchestrator | Sunday 01 June 2025 22:45:02 +0000 (0:00:00.186) 0:00:21.967 *********** 2025-06-01 22:45:02.273616 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'})  2025-06-01 
22:45:02.274090 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'})  2025-06-01 22:45:02.275663 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:45:02.277063 | orchestrator | 2025-06-01 22:45:02.277510 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-01 22:45:02.279279 | orchestrator | Sunday 01 June 2025 22:45:02 +0000 (0:00:00.170) 0:00:22.138 *********** 2025-06-01 22:45:02.443513 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'})  2025-06-01 22:45:02.445063 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'})  2025-06-01 22:45:02.446522 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:45:02.447445 | orchestrator | 2025-06-01 22:45:02.448121 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-01 22:45:02.448754 | orchestrator | Sunday 01 June 2025 22:45:02 +0000 (0:00:00.169) 0:00:22.307 *********** 2025-06-01 22:45:02.599350 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'})  2025-06-01 22:45:02.600661 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'})  2025-06-01 22:45:02.602098 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:45:02.602757 | orchestrator | 2025-06-01 22:45:02.603685 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-01 22:45:02.604638 | orchestrator | Sunday 01 June 2025 
22:45:02 +0000 (0:00:00.155) 0:00:22.463 ***********
2025-06-01 22:45:02.741951 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'})
2025-06-01 22:45:02.742426 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'})
2025-06-01 22:45:02.743816 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:45:02.744723 | orchestrator |
2025-06-01 22:45:02.745666 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-01 22:45:02.746836 | orchestrator | Sunday 01 June 2025 22:45:02 +0000 (0:00:00.143) 0:00:22.606 ***********
2025-06-01 22:45:02.902642 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'})
2025-06-01 22:45:02.903372 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'})
2025-06-01 22:45:02.905889 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:45:02.908236 | orchestrator |
2025-06-01 22:45:02.908268 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-01 22:45:02.909727 | orchestrator | Sunday 01 June 2025 22:45:02 +0000 (0:00:00.160) 0:00:22.767 ***********
2025-06-01 22:45:03.408190 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:45:03.408289 | orchestrator |
2025-06-01 22:45:03.408645 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-01 22:45:03.409580 | orchestrator | Sunday 01 June 2025 22:45:03 +0000 (0:00:00.504) 0:00:23.272 ***********
2025-06-01 22:45:03.932024 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:45:03.932273 | orchestrator |
2025-06-01 22:45:03.933490 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-01 22:45:03.934155 | orchestrator | Sunday 01 June 2025 22:45:03 +0000 (0:00:00.523) 0:00:23.795 ***********
2025-06-01 22:45:04.089974 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:45:04.090878 | orchestrator |
2025-06-01 22:45:04.091326 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-01 22:45:04.091743 | orchestrator | Sunday 01 June 2025 22:45:04 +0000 (0:00:00.159) 0:00:23.954 ***********
2025-06-01 22:45:04.260068 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'vg_name': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'})
2025-06-01 22:45:04.261191 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'vg_name': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'})
2025-06-01 22:45:04.262335 | orchestrator |
2025-06-01 22:45:04.264427 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-01 22:45:04.264451 | orchestrator | Sunday 01 June 2025 22:45:04 +0000 (0:00:00.170) 0:00:24.125 ***********
2025-06-01 22:45:04.411621 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'})
2025-06-01 22:45:04.413039 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'})
2025-06-01 22:45:04.413826 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:45:04.414966 | orchestrator |
2025-06-01 22:45:04.415722 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-01 22:45:04.416633 | orchestrator | Sunday 01 June 2025 22:45:04 +0000 (0:00:00.151) 0:00:24.276 ***********
2025-06-01 22:45:04.759896 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'})
2025-06-01 22:45:04.759997 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'})
2025-06-01 22:45:04.764107 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:45:04.764139 | orchestrator |
2025-06-01 22:45:04.764152 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-01 22:45:04.764165 | orchestrator | Sunday 01 June 2025 22:45:04 +0000 (0:00:00.346) 0:00:24.623 ***********
2025-06-01 22:45:04.922446 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'})
2025-06-01 22:45:04.922891 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'})
2025-06-01 22:45:04.923158 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:45:04.924477 | orchestrator |
2025-06-01 22:45:04.924629 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-01 22:45:04.925137 | orchestrator | Sunday 01 June 2025 22:45:04 +0000 (0:00:00.162) 0:00:24.786 ***********
2025-06-01 22:45:05.218447 | orchestrator | ok: [testbed-node-3] => {
2025-06-01 22:45:05.218608 | orchestrator |  "lvm_report": {
2025-06-01 22:45:05.221707 | orchestrator |  "lv": [
2025-06-01 22:45:05.222429 | orchestrator |  {
2025-06-01 22:45:05.222738 | orchestrator |  "lv_name": "osd-block-04cd8323-667e-5571-83c4-b35d38a67016",
2025-06-01 22:45:05.224929 | orchestrator |  "vg_name": "ceph-04cd8323-667e-5571-83c4-b35d38a67016"
2025-06-01 22:45:05.225511 | orchestrator |  },
2025-06-01 22:45:05.226116 | orchestrator |  {
2025-06-01 22:45:05.226613 | orchestrator |  "lv_name": "osd-block-836f126b-3930-552c-8c28-37312a7074e3",
2025-06-01 22:45:05.227124 | orchestrator |  "vg_name": "ceph-836f126b-3930-552c-8c28-37312a7074e3"
2025-06-01 22:45:05.227503 | orchestrator |  }
2025-06-01 22:45:05.228611 | orchestrator |  ],
2025-06-01 22:45:05.229155 | orchestrator |  "pv": [
2025-06-01 22:45:05.229930 | orchestrator |  {
2025-06-01 22:45:05.230342 | orchestrator |  "pv_name": "/dev/sdb",
2025-06-01 22:45:05.230732 | orchestrator |  "vg_name": "ceph-836f126b-3930-552c-8c28-37312a7074e3"
2025-06-01 22:45:05.231160 | orchestrator |  },
2025-06-01 22:45:05.231645 | orchestrator |  {
2025-06-01 22:45:05.232065 | orchestrator |  "pv_name": "/dev/sdc",
2025-06-01 22:45:05.232425 | orchestrator |  "vg_name": "ceph-04cd8323-667e-5571-83c4-b35d38a67016"
2025-06-01 22:45:05.233035 | orchestrator |  }
2025-06-01 22:45:05.233955 | orchestrator |  ]
2025-06-01 22:45:05.234265 | orchestrator |  }
2025-06-01 22:45:05.235016 | orchestrator | }
2025-06-01 22:45:05.235417 | orchestrator |
2025-06-01 22:45:05.236292 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-01 22:45:05.236562 | orchestrator |
2025-06-01 22:45:05.237458 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-01 22:45:05.238143 | orchestrator | Sunday 01 June 2025 22:45:05 +0000 (0:00:00.297) 0:00:25.083 ***********
2025-06-01 22:45:05.489746 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-01 22:45:05.489959 | orchestrator |
2025-06-01 22:45:05.490505 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-01 22:45:05.491392 | orchestrator | Sunday 01 June 2025 22:45:05 +0000 (0:00:00.271) 0:00:25.355 ***********
2025-06-01 22:45:05.736296 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:45:05.736872 | orchestrator |
2025-06-01 22:45:05.738895 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:45:05.739511 | orchestrator | Sunday 01 June 2025 22:45:05 +0000 (0:00:00.244) 0:00:25.599 ***********
2025-06-01 22:45:06.149117 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-06-01 22:45:06.149244 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-06-01 22:45:06.149910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-06-01 22:45:06.151550 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-06-01 22:45:06.151727 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-06-01 22:45:06.153419 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-06-01 22:45:06.153536 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-06-01 22:45:06.154799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-06-01 22:45:06.155248 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-06-01 22:45:06.156123 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-06-01 22:45:06.157282 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-06-01 22:45:06.158323 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-06-01 22:45:06.159199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-06-01 22:45:06.159588 | orchestrator |
2025-06-01 22:45:06.160613 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:45:06.161267 | orchestrator | Sunday 01 June 2025 22:45:06 +0000 (0:00:00.414) 0:00:26.014 ***********
2025-06-01 22:45:06.345123 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:06.345644 | orchestrator |
2025-06-01 22:45:06.346723 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:45:06.347707 | orchestrator | Sunday 01 June 2025 22:45:06 +0000 (0:00:00.196) 0:00:26.210 ***********
2025-06-01 22:45:06.560761 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:06.560906 | orchestrator |
2025-06-01 22:45:06.563849 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:45:06.563884 | orchestrator | Sunday 01 June 2025 22:45:06 +0000 (0:00:00.209) 0:00:26.420 ***********
2025-06-01 22:45:06.770901 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:06.771004 | orchestrator |
2025-06-01 22:45:06.771941 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:45:06.772853 | orchestrator | Sunday 01 June 2025 22:45:06 +0000 (0:00:00.211) 0:00:26.632 ***********
2025-06-01 22:45:07.383077 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:07.383371 | orchestrator |
2025-06-01 22:45:07.384031 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:45:07.385632 | orchestrator | Sunday 01 June 2025 22:45:07 +0000 (0:00:00.615) 0:00:27.248 ***********
2025-06-01 22:45:07.617814 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:07.618561 | orchestrator |
2025-06-01 22:45:07.619628 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:45:07.620946 | orchestrator | Sunday 01 June 2025 22:45:07 +0000 (0:00:00.233) 0:00:27.482 ***********
2025-06-01 22:45:07.823984 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:07.824837 | orchestrator |
2025-06-01 22:45:07.825629 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:45:07.826845 | orchestrator | Sunday 01 June 2025 22:45:07 +0000 (0:00:00.205) 0:00:27.687 ***********
2025-06-01 22:45:08.023256 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:08.023355 | orchestrator |
2025-06-01 22:45:08.024704 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:45:08.026515 | orchestrator | Sunday 01 June 2025 22:45:08 +0000 (0:00:00.200) 0:00:27.887 ***********
2025-06-01 22:45:08.229308 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:08.229915 | orchestrator |
2025-06-01 22:45:08.230428 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:45:08.231739 | orchestrator | Sunday 01 June 2025 22:45:08 +0000 (0:00:00.206) 0:00:28.094 ***********
2025-06-01 22:45:08.635620 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66)
2025-06-01 22:45:08.636730 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66)
2025-06-01 22:45:08.638395 | orchestrator |
2025-06-01 22:45:08.641086 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:45:08.641156 | orchestrator | Sunday 01 June 2025 22:45:08 +0000 (0:00:00.406) 0:00:28.500 ***********
2025-06-01 22:45:09.097672 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_540779ba-6163-469a-a896-cda4c9a0c816)
2025-06-01 22:45:09.099063 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_540779ba-6163-469a-a896-cda4c9a0c816)
2025-06-01 22:45:09.100326 | orchestrator |
2025-06-01 22:45:09.101202 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:45:09.102203 | orchestrator | Sunday 01 June 2025 22:45:09 +0000 (0:00:00.459) 0:00:28.960 ***********
2025-06-01 22:45:09.514637 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a15d8421-e56a-4621-aed8-2eaa8f026081)
2025-06-01 22:45:09.515905 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a15d8421-e56a-4621-aed8-2eaa8f026081)
2025-06-01 22:45:09.516451 | orchestrator |
2025-06-01 22:45:09.517127 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:45:09.517852 | orchestrator | Sunday 01 June 2025 22:45:09 +0000 (0:00:00.420) 0:00:29.380 ***********
2025-06-01 22:45:09.971248 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f76f8f5b-fbcd-4a13-87b7-7d8b29fb80c4)
2025-06-01 22:45:09.973569 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f76f8f5b-fbcd-4a13-87b7-7d8b29fb80c4)
2025-06-01 22:45:09.974212 | orchestrator |
2025-06-01 22:45:09.975061 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 22:45:09.975492 | orchestrator | Sunday 01 June 2025 22:45:09 +0000 (0:00:00.454) 0:00:29.834 ***********
2025-06-01 22:45:10.335585 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-01 22:45:10.336197 | orchestrator |
2025-06-01 22:45:10.337004 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:45:10.337997 | orchestrator | Sunday 01 June 2025 22:45:10 +0000 (0:00:00.365) 0:00:30.200 ***********
2025-06-01 22:45:10.984531 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-06-01 22:45:10.984844 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-06-01 22:45:10.986499 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-06-01 22:45:10.986930 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-06-01 22:45:10.988198 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-06-01 22:45:10.989091 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-06-01 22:45:10.989626 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-06-01 22:45:10.990237 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-06-01 22:45:10.990930 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-06-01 22:45:10.991310 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-06-01 22:45:10.991807 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-06-01 22:45:10.992224 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-06-01 22:45:10.992649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-06-01 22:45:10.993176 | orchestrator |
2025-06-01 22:45:10.993632 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:45:10.994099 | orchestrator | Sunday 01 June 2025 22:45:10 +0000 (0:00:00.648) 0:00:30.849 ***********
2025-06-01 22:45:11.205038 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:11.205223 | orchestrator |
2025-06-01 22:45:11.205825 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:45:11.206501 | orchestrator | Sunday 01 June 2025 22:45:11 +0000 (0:00:00.219) 0:00:31.068 ***********
2025-06-01 22:45:11.418351 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:11.418532 | orchestrator |
2025-06-01 22:45:11.419154 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:45:11.419900 | orchestrator | Sunday 01 June 2025 22:45:11 +0000 (0:00:00.214) 0:00:31.283 ***********
2025-06-01 22:45:11.624256 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:11.625358 | orchestrator |
2025-06-01 22:45:11.626811 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:45:11.627282 | orchestrator | Sunday 01 June 2025 22:45:11 +0000 (0:00:00.204) 0:00:31.487 ***********
2025-06-01 22:45:11.826282 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:11.828134 | orchestrator |
2025-06-01 22:45:11.828532 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:45:11.829838 | orchestrator | Sunday 01 June 2025 22:45:11 +0000 (0:00:00.200) 0:00:31.688 ***********
2025-06-01 22:45:12.013189 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:12.013395 | orchestrator |
2025-06-01 22:45:12.014975 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:45:12.016231 | orchestrator | Sunday 01 June 2025 22:45:12 +0000 (0:00:00.189) 0:00:31.878 ***********
2025-06-01 22:45:12.232724 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:12.233301 | orchestrator |
2025-06-01 22:45:12.234713 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:45:12.234978 | orchestrator | Sunday 01 June 2025 22:45:12 +0000 (0:00:00.220) 0:00:32.098 ***********
2025-06-01 22:45:12.422217 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:12.422374 | orchestrator |
2025-06-01 22:45:12.423226 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:45:12.423642 | orchestrator | Sunday 01 June 2025 22:45:12 +0000 (0:00:00.189) 0:00:32.287 ***********
2025-06-01 22:45:12.615913 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:12.616665 | orchestrator |
2025-06-01 22:45:12.617367 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:45:12.618226 | orchestrator | Sunday 01 June 2025 22:45:12 +0000 (0:00:00.193) 0:00:32.481 ***********
2025-06-01 22:45:13.460682 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-06-01 22:45:13.461456 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-06-01 22:45:13.462538 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-06-01 22:45:13.464536 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-06-01 22:45:13.465549 | orchestrator |
2025-06-01 22:45:13.466580 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:45:13.467400 | orchestrator | Sunday 01 June 2025 22:45:13 +0000 (0:00:00.842) 0:00:33.323 ***********
2025-06-01 22:45:13.656417 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:13.657414 | orchestrator |
2025-06-01 22:45:13.658376 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:45:13.659140 | orchestrator | Sunday 01 June 2025 22:45:13 +0000 (0:00:00.196) 0:00:33.520 ***********
2025-06-01 22:45:13.855679 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:13.856234 | orchestrator |
2025-06-01 22:45:13.857475 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:45:13.857985 | orchestrator | Sunday 01 June 2025 22:45:13 +0000 (0:00:00.199) 0:00:33.719 ***********
2025-06-01 22:45:14.561572 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:14.561681 | orchestrator |
2025-06-01 22:45:14.562013 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-01 22:45:14.565120 | orchestrator | Sunday 01 June 2025 22:45:14 +0000 (0:00:00.706) 0:00:34.425 ***********
2025-06-01 22:45:14.786114 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:14.786557 | orchestrator |
2025-06-01 22:45:14.787418 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-06-01 22:45:14.788161 | orchestrator | Sunday 01 June 2025 22:45:14 +0000 (0:00:00.225) 0:00:34.651 ***********
2025-06-01 22:45:14.953888 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:14.954107 | orchestrator |
2025-06-01 22:45:14.954493 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-06-01 22:45:14.954899 | orchestrator | Sunday 01 June 2025 22:45:14 +0000 (0:00:00.167) 0:00:34.819 ***********
2025-06-01 22:45:15.150261 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '656e26cc-5762-5518-9587-501a37b6e3ae'}})
2025-06-01 22:45:15.150440 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'}})
2025-06-01 22:45:15.151208 | orchestrator |
2025-06-01 22:45:15.152009 | orchestrator | TASK [Create block VGs] ********************************************************
2025-06-01 22:45:15.152207 | orchestrator | Sunday 01 June 2025 22:45:15 +0000 (0:00:00.196) 0:00:35.015 ***********
2025-06-01 22:45:17.009494 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})
2025-06-01 22:45:17.010694 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})
2025-06-01 22:45:17.011837 | orchestrator |
2025-06-01 22:45:17.013343 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-06-01 22:45:17.015156 | orchestrator | Sunday 01 June 2025 22:45:17 +0000 (0:00:01.856) 0:00:36.872 ***********
2025-06-01 22:45:17.171273 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})
2025-06-01 22:45:17.171431 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})
2025-06-01 22:45:17.173279 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:17.175047 | orchestrator |
2025-06-01 22:45:17.176886 | orchestrator | TASK [Create block LVs] ********************************************************
2025-06-01 22:45:17.177689 | orchestrator | Sunday 01 June 2025 22:45:17 +0000 (0:00:00.162) 0:00:37.034 ***********
2025-06-01 22:45:18.492185 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})
2025-06-01 22:45:18.492381 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})
2025-06-01 22:45:18.493280 | orchestrator |
2025-06-01 22:45:18.494662 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-06-01 22:45:18.495336 | orchestrator | Sunday 01 June 2025 22:45:18 +0000 (0:00:01.321) 0:00:38.356 ***********
2025-06-01 22:45:18.651854 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})
2025-06-01 22:45:18.651978 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})
2025-06-01 22:45:18.652747 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:18.653821 | orchestrator |
2025-06-01 22:45:18.654819 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-06-01 22:45:18.655148 | orchestrator | Sunday 01 June 2025 22:45:18 +0000 (0:00:00.159) 0:00:38.515 ***********
2025-06-01 22:45:18.791598 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:18.792230 | orchestrator |
2025-06-01 22:45:18.794101 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-06-01 22:45:18.795649 | orchestrator | Sunday 01 June 2025 22:45:18 +0000 (0:00:00.140) 0:00:38.656 ***********
2025-06-01 22:45:18.953832 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})
2025-06-01 22:45:18.954247 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})
2025-06-01 22:45:18.956035 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:18.956762 | orchestrator |
2025-06-01 22:45:18.957716 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-06-01 22:45:18.958702 | orchestrator | Sunday 01 June 2025 22:45:18 +0000 (0:00:00.162) 0:00:38.819 ***********
2025-06-01 22:45:19.099035 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:19.099855 | orchestrator |
2025-06-01 22:45:19.101292 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-06-01 22:45:19.102472 | orchestrator | Sunday 01 June 2025 22:45:19 +0000 (0:00:00.143) 0:00:38.962 ***********
2025-06-01 22:45:19.249896 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})
2025-06-01 22:45:19.250610 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})
2025-06-01 22:45:19.251226 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:19.251987 | orchestrator |
2025-06-01 22:45:19.253214 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-06-01 22:45:19.253739 | orchestrator | Sunday 01 June 2025 22:45:19 +0000 (0:00:00.152) 0:00:39.115 ***********
2025-06-01 22:45:19.608718 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:19.609213 | orchestrator |
2025-06-01 22:45:19.610168 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-06-01 22:45:19.613588 | orchestrator | Sunday 01 June 2025 22:45:19 +0000 (0:00:00.357) 0:00:39.473 ***********
2025-06-01 22:45:19.767452 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})
2025-06-01 22:45:19.768426 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})
2025-06-01 22:45:19.769786 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:19.770922 | orchestrator |
2025-06-01 22:45:19.771789 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-06-01 22:45:19.772999 | orchestrator | Sunday 01 June 2025 22:45:19 +0000 (0:00:00.159) 0:00:39.632 ***********
2025-06-01 22:45:19.931353 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:45:19.932665 | orchestrator |
2025-06-01 22:45:19.933717 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-06-01 22:45:19.934472 | orchestrator | Sunday 01 June 2025 22:45:19 +0000 (0:00:00.163) 0:00:39.796 ***********
2025-06-01 22:45:20.106246 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})
2025-06-01 22:45:20.107470 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})
2025-06-01 22:45:20.108442 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:20.110364 | orchestrator |
2025-06-01 22:45:20.111205 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-06-01 22:45:20.112617 | orchestrator | Sunday 01 June 2025 22:45:20 +0000 (0:00:00.170) 0:00:39.966 ***********
2025-06-01 22:45:20.245870 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})
2025-06-01 22:45:20.246241 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})
2025-06-01 22:45:20.247887 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:20.249515 | orchestrator |
2025-06-01 22:45:20.252000 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-06-01 22:45:20.252027 | orchestrator | Sunday 01 June 2025 22:45:20 +0000 (0:00:00.144) 0:00:40.111 ***********
2025-06-01 22:45:20.409241 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})
2025-06-01 22:45:20.409825 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})
2025-06-01 22:45:20.410795 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:20.411787 | orchestrator |
2025-06-01 22:45:20.411810 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-06-01 22:45:20.411824 | orchestrator | Sunday 01 June 2025 22:45:20 +0000 (0:00:00.163) 0:00:40.274 ***********
2025-06-01 22:45:20.561724 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:20.562596 | orchestrator |
2025-06-01 22:45:20.562979 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-06-01 22:45:20.564011 | orchestrator | Sunday 01 June 2025 22:45:20 +0000 (0:00:00.152) 0:00:40.427 ***********
2025-06-01 22:45:20.707822 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:20.708509 | orchestrator |
2025-06-01 22:45:20.709523 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-06-01 22:45:20.710442 | orchestrator | Sunday 01 June 2025 22:45:20 +0000 (0:00:00.144) 0:00:40.572 ***********
2025-06-01 22:45:20.844366 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:20.846116 | orchestrator |
2025-06-01 22:45:20.847195 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-06-01 22:45:20.848162 | orchestrator | Sunday 01 June 2025 22:45:20 +0000 (0:00:00.136) 0:00:40.708 ***********
2025-06-01 22:45:20.992647 | orchestrator | ok: [testbed-node-4] => {
2025-06-01 22:45:20.992900 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-06-01 22:45:20.993261 | orchestrator | }
2025-06-01 22:45:20.993726 | orchestrator |
2025-06-01 22:45:20.994126 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-06-01 22:45:20.994519 | orchestrator | Sunday 01 June 2025 22:45:20 +0000 (0:00:00.149) 0:00:40.858 ***********
2025-06-01 22:45:21.130388 | orchestrator | ok: [testbed-node-4] => {
2025-06-01 22:45:21.130483 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-06-01 22:45:21.130498 | orchestrator | }
2025-06-01 22:45:21.130596 | orchestrator |
2025-06-01 22:45:21.130614 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-06-01 22:45:21.131254 | orchestrator | Sunday 01 June 2025 22:45:21 +0000 (0:00:00.136) 0:00:40.995 ***********
2025-06-01 22:45:21.271538 | orchestrator | ok: [testbed-node-4] => {
2025-06-01 22:45:21.272353 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-06-01 22:45:21.273160 | orchestrator | }
2025-06-01 22:45:21.273970 | orchestrator |
2025-06-01 22:45:21.276114 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-06-01 22:45:21.276565 | orchestrator | Sunday 01 June 2025 22:45:21 +0000 (0:00:00.140) 0:00:41.135 ***********
2025-06-01 22:45:21.973020 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:45:21.974282 | orchestrator |
2025-06-01 22:45:21.974319 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-06-01 22:45:21.975147 | orchestrator | Sunday 01 June 2025 22:45:21 +0000 (0:00:00.701) 0:00:41.837 ***********
2025-06-01 22:45:22.506252 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:45:22.507311 | orchestrator |
2025-06-01 22:45:22.508118 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-06-01 22:45:22.509288 | orchestrator | Sunday 01 June 2025 22:45:22 +0000 (0:00:00.531) 0:00:42.368 ***********
2025-06-01 22:45:23.033322 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:45:23.034334 | orchestrator |
2025-06-01 22:45:23.036185 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-06-01 22:45:23.037380 | orchestrator | Sunday 01 June 2025 22:45:23 +0000 (0:00:00.529) 0:00:42.897 ***********
2025-06-01 22:45:23.173040 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:45:23.173196 | orchestrator |
2025-06-01 22:45:23.175195 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-06-01 22:45:23.176050 | orchestrator | Sunday 01 June 2025 22:45:23 +0000 (0:00:00.139) 0:00:43.037 ***********
2025-06-01 22:45:23.284136 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:23.286344 | orchestrator |
2025-06-01 22:45:23.286395 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-06-01 22:45:23.286409 | orchestrator | Sunday 01 June 2025 22:45:23 +0000 (0:00:00.110) 0:00:43.147 ***********
2025-06-01 22:45:23.426366 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:23.426991 | orchestrator |
2025-06-01 22:45:23.427058 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-06-01 22:45:23.427715 | orchestrator | Sunday 01 June 2025 22:45:23 +0000 (0:00:00.143) 0:00:43.291 ***********
2025-06-01 22:45:23.568419 | orchestrator | ok: [testbed-node-4] => {
2025-06-01 22:45:23.568930 | orchestrator |  "vgs_report": {
2025-06-01 22:45:23.569486 | orchestrator |  "vg": []
2025-06-01 22:45:23.569949 | orchestrator |  }
2025-06-01 22:45:23.571215 | orchestrator | }
2025-06-01 22:45:23.571247 | orchestrator |
2025-06-01 22:45:23.571495 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-06-01 22:45:23.575397 | orchestrator | Sunday 01 June 2025 22:45:23 +0000 (0:00:00.143) 0:00:43.434 ***********
2025-06-01 22:45:23.716636 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:23.717046 | orchestrator |
2025-06-01 22:45:23.717849 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-06-01 22:45:23.718793 | orchestrator | Sunday 01 June 2025 22:45:23 +0000 (0:00:00.146) 0:00:43.581 ***********
2025-06-01 22:45:23.855567 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:23.856100 | orchestrator |
2025-06-01 22:45:23.856851 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-06-01 22:45:23.857557 | orchestrator | Sunday 01 June 2025 22:45:23 +0000 (0:00:00.138) 0:00:43.720 ***********
2025-06-01 22:45:24.013914 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:24.014760 | orchestrator |
2025-06-01 22:45:24.018146 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-06-01 22:45:24.019608 | orchestrator | Sunday 01 June 2025 22:45:24 +0000 (0:00:00.157) 0:00:43.878 ***********
2025-06-01 22:45:24.145017 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:24.145699 | orchestrator |
2025-06-01 22:45:24.147362 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-06-01 22:45:24.148912 | orchestrator | Sunday 01 June 2025 22:45:24 +0000 (0:00:00.130) 0:00:44.008 ***********
2025-06-01 22:45:24.308932 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:24.309591 | orchestrator |
2025-06-01 22:45:24.311479 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-06-01 22:45:24.312707 | orchestrator | Sunday 01 June 2025 22:45:24 +0000 (0:00:00.162) 0:00:44.171 ***********
2025-06-01 22:45:24.668633 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:24.669747 | orchestrator |
2025-06-01 22:45:24.671341 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-06-01 22:45:24.672464 | orchestrator | Sunday 01 June 2025 22:45:24 +0000 (0:00:00.361) 0:00:44.533 ***********
2025-06-01 22:45:24.797831 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:45:24.800133 | orchestrator |
2025-06-01 22:45:24.801415 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices]
**************** 2025-06-01 22:45:24.802432 | orchestrator | Sunday 01 June 2025 22:45:24 +0000 (0:00:00.129) 0:00:44.662 *********** 2025-06-01 22:45:24.922182 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:45:24.922334 | orchestrator | 2025-06-01 22:45:24.923431 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-01 22:45:24.924468 | orchestrator | Sunday 01 June 2025 22:45:24 +0000 (0:00:00.124) 0:00:44.787 *********** 2025-06-01 22:45:25.074495 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:45:25.074659 | orchestrator | 2025-06-01 22:45:25.075383 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-01 22:45:25.076297 | orchestrator | Sunday 01 June 2025 22:45:25 +0000 (0:00:00.152) 0:00:44.939 *********** 2025-06-01 22:45:25.211077 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:45:25.211269 | orchestrator | 2025-06-01 22:45:25.213320 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-01 22:45:25.215245 | orchestrator | Sunday 01 June 2025 22:45:25 +0000 (0:00:00.136) 0:00:45.076 *********** 2025-06-01 22:45:25.347415 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:45:25.349376 | orchestrator | 2025-06-01 22:45:25.349406 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-01 22:45:25.350256 | orchestrator | Sunday 01 June 2025 22:45:25 +0000 (0:00:00.136) 0:00:45.212 *********** 2025-06-01 22:45:25.493921 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:45:25.494456 | orchestrator | 2025-06-01 22:45:25.495477 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-01 22:45:25.497557 | orchestrator | Sunday 01 June 2025 22:45:25 +0000 (0:00:00.145) 0:00:45.358 *********** 2025-06-01 22:45:25.620967 | orchestrator | skipping: [testbed-node-4] 
2025-06-01 22:45:25.621089 | orchestrator | 2025-06-01 22:45:25.622349 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-01 22:45:25.623288 | orchestrator | Sunday 01 June 2025 22:45:25 +0000 (0:00:00.123) 0:00:45.481 *********** 2025-06-01 22:45:25.764194 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:45:25.764358 | orchestrator | 2025-06-01 22:45:25.765335 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-01 22:45:25.766198 | orchestrator | Sunday 01 June 2025 22:45:25 +0000 (0:00:00.147) 0:00:45.629 *********** 2025-06-01 22:45:25.918649 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})  2025-06-01 22:45:25.919314 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})  2025-06-01 22:45:25.920023 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:45:25.920320 | orchestrator | 2025-06-01 22:45:25.922109 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-01 22:45:25.922340 | orchestrator | Sunday 01 June 2025 22:45:25 +0000 (0:00:00.154) 0:00:45.783 *********** 2025-06-01 22:45:26.082685 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})  2025-06-01 22:45:26.083639 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})  2025-06-01 22:45:26.084565 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:45:26.086067 | orchestrator | 2025-06-01 22:45:26.087372 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] 
************************************* 2025-06-01 22:45:26.087888 | orchestrator | Sunday 01 June 2025 22:45:26 +0000 (0:00:00.163) 0:00:45.947 *********** 2025-06-01 22:45:26.229814 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})  2025-06-01 22:45:26.230456 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})  2025-06-01 22:45:26.231425 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:45:26.231582 | orchestrator | 2025-06-01 22:45:26.232173 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-01 22:45:26.233516 | orchestrator | Sunday 01 June 2025 22:45:26 +0000 (0:00:00.146) 0:00:46.093 *********** 2025-06-01 22:45:26.603712 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})  2025-06-01 22:45:26.603907 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})  2025-06-01 22:45:26.605212 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:45:26.606421 | orchestrator | 2025-06-01 22:45:26.606631 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-01 22:45:26.607443 | orchestrator | Sunday 01 June 2025 22:45:26 +0000 (0:00:00.374) 0:00:46.468 *********** 2025-06-01 22:45:26.775868 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})  2025-06-01 22:45:26.776076 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 
'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})  2025-06-01 22:45:26.777727 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:45:26.778378 | orchestrator | 2025-06-01 22:45:26.778994 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-01 22:45:26.779538 | orchestrator | Sunday 01 June 2025 22:45:26 +0000 (0:00:00.172) 0:00:46.640 *********** 2025-06-01 22:45:26.937261 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})  2025-06-01 22:45:26.938300 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})  2025-06-01 22:45:26.941123 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:45:26.941145 | orchestrator | 2025-06-01 22:45:26.942066 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-01 22:45:26.942819 | orchestrator | Sunday 01 June 2025 22:45:26 +0000 (0:00:00.160) 0:00:46.801 *********** 2025-06-01 22:45:27.099147 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})  2025-06-01 22:45:27.099558 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})  2025-06-01 22:45:27.101069 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:45:27.101496 | orchestrator | 2025-06-01 22:45:27.102591 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-01 22:45:27.103281 | orchestrator | Sunday 01 June 2025 22:45:27 +0000 (0:00:00.162) 0:00:46.963 *********** 2025-06-01 22:45:27.261881 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})  2025-06-01 22:45:27.263036 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})  2025-06-01 22:45:27.264678 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:45:27.264794 | orchestrator | 2025-06-01 22:45:27.265167 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-01 22:45:27.266134 | orchestrator | Sunday 01 June 2025 22:45:27 +0000 (0:00:00.162) 0:00:47.126 *********** 2025-06-01 22:45:27.788909 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:45:27.789365 | orchestrator | 2025-06-01 22:45:27.790486 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-01 22:45:27.791452 | orchestrator | Sunday 01 June 2025 22:45:27 +0000 (0:00:00.527) 0:00:47.653 *********** 2025-06-01 22:45:28.311871 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:45:28.313097 | orchestrator | 2025-06-01 22:45:28.313963 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-01 22:45:28.315321 | orchestrator | Sunday 01 June 2025 22:45:28 +0000 (0:00:00.522) 0:00:48.175 *********** 2025-06-01 22:45:28.463349 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:45:28.465206 | orchestrator | 2025-06-01 22:45:28.466248 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-01 22:45:28.467506 | orchestrator | Sunday 01 June 2025 22:45:28 +0000 (0:00:00.152) 0:00:48.328 *********** 2025-06-01 22:45:28.640716 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'vg_name': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'}) 2025-06-01 22:45:28.641570 | orchestrator | ok: [testbed-node-4] => 
(item={'lv_name': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'vg_name': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'}) 2025-06-01 22:45:28.642502 | orchestrator | 2025-06-01 22:45:28.643414 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-01 22:45:28.644139 | orchestrator | Sunday 01 June 2025 22:45:28 +0000 (0:00:00.177) 0:00:48.505 *********** 2025-06-01 22:45:28.800641 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})  2025-06-01 22:45:28.801705 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})  2025-06-01 22:45:28.802313 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:45:28.803467 | orchestrator | 2025-06-01 22:45:28.804465 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-01 22:45:28.805644 | orchestrator | Sunday 01 June 2025 22:45:28 +0000 (0:00:00.159) 0:00:48.665 *********** 2025-06-01 22:45:28.971410 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})  2025-06-01 22:45:28.971990 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})  2025-06-01 22:45:28.972590 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:45:28.973094 | orchestrator | 2025-06-01 22:45:28.974186 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-01 22:45:28.974696 | orchestrator | Sunday 01 June 2025 22:45:28 +0000 (0:00:00.170) 0:00:48.836 *********** 2025-06-01 22:45:29.126200 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})  2025-06-01 22:45:29.127038 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})  2025-06-01 22:45:29.128252 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:45:29.130114 | orchestrator | 2025-06-01 22:45:29.130143 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-01 22:45:29.130726 | orchestrator | Sunday 01 June 2025 22:45:29 +0000 (0:00:00.154) 0:00:48.991 *********** 2025-06-01 22:45:29.618555 | orchestrator | ok: [testbed-node-4] => { 2025-06-01 22:45:29.619124 | orchestrator |  "lvm_report": { 2025-06-01 22:45:29.620050 | orchestrator |  "lv": [ 2025-06-01 22:45:29.621190 | orchestrator |  { 2025-06-01 22:45:29.623331 | orchestrator |  "lv_name": "osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c", 2025-06-01 22:45:29.624454 | orchestrator |  "vg_name": "ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c" 2025-06-01 22:45:29.625373 | orchestrator |  }, 2025-06-01 22:45:29.625724 | orchestrator |  { 2025-06-01 22:45:29.626648 | orchestrator |  "lv_name": "osd-block-656e26cc-5762-5518-9587-501a37b6e3ae", 2025-06-01 22:45:29.627231 | orchestrator |  "vg_name": "ceph-656e26cc-5762-5518-9587-501a37b6e3ae" 2025-06-01 22:45:29.627707 | orchestrator |  } 2025-06-01 22:45:29.628530 | orchestrator |  ], 2025-06-01 22:45:29.629475 | orchestrator |  "pv": [ 2025-06-01 22:45:29.630120 | orchestrator |  { 2025-06-01 22:45:29.630532 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-01 22:45:29.631015 | orchestrator |  "vg_name": "ceph-656e26cc-5762-5518-9587-501a37b6e3ae" 2025-06-01 22:45:29.631507 | orchestrator |  }, 2025-06-01 22:45:29.632362 | orchestrator |  { 2025-06-01 22:45:29.632546 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-01 22:45:29.633226 | orchestrator |  "vg_name": 
"ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c" 2025-06-01 22:45:29.634542 | orchestrator |  } 2025-06-01 22:45:29.635447 | orchestrator |  ] 2025-06-01 22:45:29.636254 | orchestrator |  } 2025-06-01 22:45:29.637063 | orchestrator | } 2025-06-01 22:45:29.638011 | orchestrator | 2025-06-01 22:45:29.638377 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-01 22:45:29.639267 | orchestrator | 2025-06-01 22:45:29.640570 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-01 22:45:29.640872 | orchestrator | Sunday 01 June 2025 22:45:29 +0000 (0:00:00.492) 0:00:49.484 *********** 2025-06-01 22:45:29.879283 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-01 22:45:29.879409 | orchestrator | 2025-06-01 22:45:29.879486 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-01 22:45:29.880237 | orchestrator | Sunday 01 June 2025 22:45:29 +0000 (0:00:00.259) 0:00:49.743 *********** 2025-06-01 22:45:30.101499 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:45:30.102219 | orchestrator | 2025-06-01 22:45:30.106091 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:45:30.106171 | orchestrator | Sunday 01 June 2025 22:45:30 +0000 (0:00:00.222) 0:00:49.965 *********** 2025-06-01 22:45:30.517512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-01 22:45:30.517617 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-01 22:45:30.518855 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-01 22:45:30.520435 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-01 22:45:30.520459 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-01 22:45:30.521064 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-06-01 22:45:30.521696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-01 22:45:30.522167 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-01 22:45:30.522710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-01 22:45:30.523210 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-01 22:45:30.524081 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-01 22:45:30.524574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-01 22:45:30.525841 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-01 22:45:30.526630 | orchestrator | 2025-06-01 22:45:30.528319 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:45:30.529311 | orchestrator | Sunday 01 June 2025 22:45:30 +0000 (0:00:00.415) 0:00:50.381 *********** 2025-06-01 22:45:30.758713 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:30.759362 | orchestrator | 2025-06-01 22:45:30.760140 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:45:30.761400 | orchestrator | Sunday 01 June 2025 22:45:30 +0000 (0:00:00.242) 0:00:50.623 *********** 2025-06-01 22:45:30.968374 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:30.969276 | orchestrator | 2025-06-01 22:45:30.969475 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:45:30.970664 | orchestrator | 
Sunday 01 June 2025 22:45:30 +0000 (0:00:00.210) 0:00:50.834 *********** 2025-06-01 22:45:31.169943 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:31.170189 | orchestrator | 2025-06-01 22:45:31.171811 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:45:31.172313 | orchestrator | Sunday 01 June 2025 22:45:31 +0000 (0:00:00.200) 0:00:51.034 *********** 2025-06-01 22:45:31.383484 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:31.383575 | orchestrator | 2025-06-01 22:45:31.384803 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:45:31.385538 | orchestrator | Sunday 01 June 2025 22:45:31 +0000 (0:00:00.210) 0:00:51.245 *********** 2025-06-01 22:45:31.590815 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:31.591836 | orchestrator | 2025-06-01 22:45:31.592104 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:45:31.593185 | orchestrator | Sunday 01 June 2025 22:45:31 +0000 (0:00:00.210) 0:00:51.456 *********** 2025-06-01 22:45:32.207996 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:32.208169 | orchestrator | 2025-06-01 22:45:32.208811 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:45:32.209562 | orchestrator | Sunday 01 June 2025 22:45:32 +0000 (0:00:00.616) 0:00:52.073 *********** 2025-06-01 22:45:32.410951 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:32.412341 | orchestrator | 2025-06-01 22:45:32.413094 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:45:32.414704 | orchestrator | Sunday 01 June 2025 22:45:32 +0000 (0:00:00.201) 0:00:52.274 *********** 2025-06-01 22:45:32.596727 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:32.597628 | orchestrator | 2025-06-01 22:45:32.598295 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:45:32.598708 | orchestrator | Sunday 01 June 2025 22:45:32 +0000 (0:00:00.187) 0:00:52.462 *********** 2025-06-01 22:45:33.011525 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b) 2025-06-01 22:45:33.012684 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b) 2025-06-01 22:45:33.014214 | orchestrator | 2025-06-01 22:45:33.015521 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:45:33.016255 | orchestrator | Sunday 01 June 2025 22:45:33 +0000 (0:00:00.413) 0:00:52.876 *********** 2025-06-01 22:45:33.462210 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8eb07f49-902f-451e-9ead-836ebd4b9d37) 2025-06-01 22:45:33.463508 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8eb07f49-902f-451e-9ead-836ebd4b9d37) 2025-06-01 22:45:33.463646 | orchestrator | 2025-06-01 22:45:33.464612 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:45:33.465403 | orchestrator | Sunday 01 June 2025 22:45:33 +0000 (0:00:00.447) 0:00:53.324 *********** 2025-06-01 22:45:33.901647 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_44465191-0fa1-4c22-9234-5804ca50669c) 2025-06-01 22:45:33.902427 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_44465191-0fa1-4c22-9234-5804ca50669c) 2025-06-01 22:45:33.902683 | orchestrator | 2025-06-01 22:45:33.903906 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:45:33.904931 | orchestrator | Sunday 01 June 2025 22:45:33 +0000 (0:00:00.442) 0:00:53.766 *********** 2025-06-01 22:45:34.339146 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_a931087f-71b6-44f2-a559-c8deb4b3c146) 2025-06-01 22:45:34.339346 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a931087f-71b6-44f2-a559-c8deb4b3c146) 2025-06-01 22:45:34.341345 | orchestrator | 2025-06-01 22:45:34.342748 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:45:34.344552 | orchestrator | Sunday 01 June 2025 22:45:34 +0000 (0:00:00.437) 0:00:54.203 *********** 2025-06-01 22:45:34.700261 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-01 22:45:34.700993 | orchestrator | 2025-06-01 22:45:34.701872 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:45:34.703243 | orchestrator | Sunday 01 June 2025 22:45:34 +0000 (0:00:00.358) 0:00:54.562 *********** 2025-06-01 22:45:35.141607 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-01 22:45:35.142804 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-01 22:45:35.143986 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-06-01 22:45:35.145366 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-01 22:45:35.146853 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-01 22:45:35.148406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-01 22:45:35.148717 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-06-01 22:45:35.149557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-01 22:45:35.150293 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-01 22:45:35.150976 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-01 22:45:35.151741 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-06-01 22:45:35.152500 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-01 22:45:35.153186 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-01 22:45:35.153670 | orchestrator | 2025-06-01 22:45:35.154478 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:45:35.154775 | orchestrator | Sunday 01 June 2025 22:45:35 +0000 (0:00:00.443) 0:00:55.006 *********** 2025-06-01 22:45:35.343030 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:35.343148 | orchestrator | 2025-06-01 22:45:35.344018 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:45:35.345166 | orchestrator | Sunday 01 June 2025 22:45:35 +0000 (0:00:00.200) 0:00:55.207 *********** 2025-06-01 22:45:35.558583 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:35.560092 | orchestrator | 2025-06-01 22:45:35.560115 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:45:35.560169 | orchestrator | Sunday 01 June 2025 22:45:35 +0000 (0:00:00.213) 0:00:55.420 *********** 2025-06-01 22:45:36.203210 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:36.204211 | orchestrator | 2025-06-01 22:45:36.204517 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:45:36.206260 | orchestrator | Sunday 01 June 2025 22:45:36 +0000 (0:00:00.648) 0:00:56.069 *********** 2025-06-01 22:45:36.412419 | orchestrator | 
skipping: [testbed-node-5] 2025-06-01 22:45:36.412528 | orchestrator | 2025-06-01 22:45:36.413196 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:45:36.415129 | orchestrator | Sunday 01 June 2025 22:45:36 +0000 (0:00:00.206) 0:00:56.276 *********** 2025-06-01 22:45:36.625275 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:36.625380 | orchestrator | 2025-06-01 22:45:36.625544 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:45:36.627092 | orchestrator | Sunday 01 June 2025 22:45:36 +0000 (0:00:00.211) 0:00:56.487 *********** 2025-06-01 22:45:36.832471 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:36.833396 | orchestrator | 2025-06-01 22:45:36.834324 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:45:36.834908 | orchestrator | Sunday 01 June 2025 22:45:36 +0000 (0:00:00.209) 0:00:56.697 *********** 2025-06-01 22:45:37.054180 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:37.054751 | orchestrator | 2025-06-01 22:45:37.056114 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:45:37.057380 | orchestrator | Sunday 01 June 2025 22:45:37 +0000 (0:00:00.221) 0:00:56.918 *********** 2025-06-01 22:45:37.261544 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:37.262309 | orchestrator | 2025-06-01 22:45:37.263182 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:45:37.264170 | orchestrator | Sunday 01 June 2025 22:45:37 +0000 (0:00:00.207) 0:00:57.126 *********** 2025-06-01 22:45:37.934659 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-01 22:45:37.935482 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-01 22:45:37.937133 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-01 
22:45:37.937160 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-01 22:45:37.937422 | orchestrator | 2025-06-01 22:45:37.938264 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:45:37.938994 | orchestrator | Sunday 01 June 2025 22:45:37 +0000 (0:00:00.671) 0:00:57.798 *********** 2025-06-01 22:45:38.145987 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:38.146158 | orchestrator | 2025-06-01 22:45:38.146929 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:45:38.148159 | orchestrator | Sunday 01 June 2025 22:45:38 +0000 (0:00:00.213) 0:00:58.011 *********** 2025-06-01 22:45:38.344858 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:38.346471 | orchestrator | 2025-06-01 22:45:38.346843 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:45:38.349157 | orchestrator | Sunday 01 June 2025 22:45:38 +0000 (0:00:00.198) 0:00:58.210 *********** 2025-06-01 22:45:38.548055 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:38.548568 | orchestrator | 2025-06-01 22:45:38.549592 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:45:38.550372 | orchestrator | Sunday 01 June 2025 22:45:38 +0000 (0:00:00.195) 0:00:58.406 *********** 2025-06-01 22:45:38.742610 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:38.743083 | orchestrator | 2025-06-01 22:45:38.744741 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-01 22:45:38.745699 | orchestrator | Sunday 01 June 2025 22:45:38 +0000 (0:00:00.201) 0:00:58.607 *********** 2025-06-01 22:45:39.086570 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:39.087315 | orchestrator | 2025-06-01 22:45:39.088627 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-06-01 22:45:39.089275 | orchestrator | Sunday 01 June 2025 22:45:39 +0000 (0:00:00.342) 0:00:58.949 *********** 2025-06-01 22:45:39.275961 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '83360607-213f-5c54-ae9b-aa580894d048'}}) 2025-06-01 22:45:39.276050 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c033fef4-2688-55e0-9ca7-53dbc156bc4e'}}) 2025-06-01 22:45:39.277113 | orchestrator | 2025-06-01 22:45:39.278288 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-01 22:45:39.279040 | orchestrator | Sunday 01 June 2025 22:45:39 +0000 (0:00:00.189) 0:00:59.139 *********** 2025-06-01 22:45:41.160335 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'}) 2025-06-01 22:45:41.160448 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'}) 2025-06-01 22:45:41.160975 | orchestrator | 2025-06-01 22:45:41.162789 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-01 22:45:41.163218 | orchestrator | Sunday 01 June 2025 22:45:41 +0000 (0:00:01.881) 0:01:01.021 *********** 2025-06-01 22:45:41.313855 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'})  2025-06-01 22:45:41.313943 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'})  2025-06-01 22:45:41.314561 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:41.315344 | orchestrator | 2025-06-01 22:45:41.316010 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-06-01 22:45:41.317008 | orchestrator | Sunday 01 June 2025 22:45:41 +0000 (0:00:00.156) 0:01:01.177 *********** 2025-06-01 22:45:42.633468 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'}) 2025-06-01 22:45:42.633576 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'}) 2025-06-01 22:45:42.635716 | orchestrator | 2025-06-01 22:45:42.637052 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-01 22:45:42.637702 | orchestrator | Sunday 01 June 2025 22:45:42 +0000 (0:00:01.318) 0:01:02.496 *********** 2025-06-01 22:45:42.784931 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'})  2025-06-01 22:45:42.785704 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'})  2025-06-01 22:45:42.786608 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:42.787467 | orchestrator | 2025-06-01 22:45:42.789494 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-01 22:45:42.789519 | orchestrator | Sunday 01 June 2025 22:45:42 +0000 (0:00:00.154) 0:01:02.650 *********** 2025-06-01 22:45:42.927670 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:42.927911 | orchestrator | 2025-06-01 22:45:42.929053 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-01 22:45:42.929995 | orchestrator | Sunday 01 June 2025 22:45:42 +0000 (0:00:00.142) 0:01:02.792 *********** 2025-06-01 22:45:43.097480 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'})  2025-06-01 22:45:43.098457 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'})  2025-06-01 22:45:43.099540 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:43.100510 | orchestrator | 2025-06-01 22:45:43.101410 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-01 22:45:43.102109 | orchestrator | Sunday 01 June 2025 22:45:43 +0000 (0:00:00.169) 0:01:02.962 *********** 2025-06-01 22:45:43.240633 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:43.240836 | orchestrator | 2025-06-01 22:45:43.242105 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-01 22:45:43.243126 | orchestrator | Sunday 01 June 2025 22:45:43 +0000 (0:00:00.143) 0:01:03.105 *********** 2025-06-01 22:45:43.398849 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'})  2025-06-01 22:45:43.399735 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'})  2025-06-01 22:45:43.400663 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:43.401829 | orchestrator | 2025-06-01 22:45:43.402862 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-01 22:45:43.403806 | orchestrator | Sunday 01 June 2025 22:45:43 +0000 (0:00:00.156) 0:01:03.262 *********** 2025-06-01 22:45:43.549101 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:43.549830 | orchestrator | 2025-06-01 22:45:43.551132 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-06-01 22:45:43.552683 | orchestrator | Sunday 01 June 2025 22:45:43 +0000 (0:00:00.151) 0:01:03.413 *********** 2025-06-01 22:45:43.697005 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'})  2025-06-01 22:45:43.697160 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'})  2025-06-01 22:45:43.698387 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:43.699531 | orchestrator | 2025-06-01 22:45:43.701334 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-01 22:45:43.701598 | orchestrator | Sunday 01 June 2025 22:45:43 +0000 (0:00:00.148) 0:01:03.561 *********** 2025-06-01 22:45:43.844138 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:45:43.844735 | orchestrator | 2025-06-01 22:45:43.845627 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-01 22:45:43.846707 | orchestrator | Sunday 01 June 2025 22:45:43 +0000 (0:00:00.146) 0:01:03.708 *********** 2025-06-01 22:45:44.216099 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'})  2025-06-01 22:45:44.216747 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'})  2025-06-01 22:45:44.218216 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:44.219301 | orchestrator | 2025-06-01 22:45:44.220114 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-01 22:45:44.220867 | orchestrator | Sunday 01 June 2025 
22:45:44 +0000 (0:00:00.372) 0:01:04.081 *********** 2025-06-01 22:45:44.375561 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'})  2025-06-01 22:45:44.376028 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'})  2025-06-01 22:45:44.377054 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:44.377992 | orchestrator | 2025-06-01 22:45:44.378662 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-01 22:45:44.379867 | orchestrator | Sunday 01 June 2025 22:45:44 +0000 (0:00:00.157) 0:01:04.239 *********** 2025-06-01 22:45:44.531937 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'})  2025-06-01 22:45:44.532108 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'})  2025-06-01 22:45:44.532128 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:44.534469 | orchestrator | 2025-06-01 22:45:44.535522 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-01 22:45:44.536590 | orchestrator | Sunday 01 June 2025 22:45:44 +0000 (0:00:00.157) 0:01:04.397 *********** 2025-06-01 22:45:44.670641 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:44.672208 | orchestrator | 2025-06-01 22:45:44.673350 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-01 22:45:44.674215 | orchestrator | Sunday 01 June 2025 22:45:44 +0000 (0:00:00.138) 0:01:04.535 *********** 2025-06-01 22:45:44.819343 | orchestrator | skipping: [testbed-node-5] 2025-06-01 
22:45:44.820561 | orchestrator | 2025-06-01 22:45:44.821603 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-01 22:45:44.822502 | orchestrator | Sunday 01 June 2025 22:45:44 +0000 (0:00:00.148) 0:01:04.684 *********** 2025-06-01 22:45:44.966192 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:44.966563 | orchestrator | 2025-06-01 22:45:44.967662 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-01 22:45:44.968992 | orchestrator | Sunday 01 June 2025 22:45:44 +0000 (0:00:00.145) 0:01:04.830 *********** 2025-06-01 22:45:45.122061 | orchestrator | ok: [testbed-node-5] => { 2025-06-01 22:45:45.123278 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-01 22:45:45.124426 | orchestrator | } 2025-06-01 22:45:45.126133 | orchestrator | 2025-06-01 22:45:45.126927 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-01 22:45:45.127819 | orchestrator | Sunday 01 June 2025 22:45:45 +0000 (0:00:00.156) 0:01:04.987 *********** 2025-06-01 22:45:45.270305 | orchestrator | ok: [testbed-node-5] => { 2025-06-01 22:45:45.271596 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-01 22:45:45.272686 | orchestrator | } 2025-06-01 22:45:45.273249 | orchestrator | 2025-06-01 22:45:45.274997 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-01 22:45:45.275798 | orchestrator | Sunday 01 June 2025 22:45:45 +0000 (0:00:00.148) 0:01:05.135 *********** 2025-06-01 22:45:45.423590 | orchestrator | ok: [testbed-node-5] => { 2025-06-01 22:45:45.425358 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-01 22:45:45.426252 | orchestrator | } 2025-06-01 22:45:45.427464 | orchestrator | 2025-06-01 22:45:45.428463 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-01 22:45:45.429242 | 
orchestrator | Sunday 01 June 2025 22:45:45 +0000 (0:00:00.151) 0:01:05.287 *********** 2025-06-01 22:45:45.945965 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:45:45.946174 | orchestrator | 2025-06-01 22:45:45.946254 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-01 22:45:45.946567 | orchestrator | Sunday 01 June 2025 22:45:45 +0000 (0:00:00.522) 0:01:05.809 *********** 2025-06-01 22:45:46.455730 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:45:46.456844 | orchestrator | 2025-06-01 22:45:46.456951 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-01 22:45:46.457608 | orchestrator | Sunday 01 June 2025 22:45:46 +0000 (0:00:00.508) 0:01:06.318 *********** 2025-06-01 22:45:46.969019 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:45:46.969560 | orchestrator | 2025-06-01 22:45:46.970712 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-01 22:45:46.970969 | orchestrator | Sunday 01 June 2025 22:45:46 +0000 (0:00:00.513) 0:01:06.832 *********** 2025-06-01 22:45:47.323401 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:45:47.323581 | orchestrator | 2025-06-01 22:45:47.324467 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-01 22:45:47.325141 | orchestrator | Sunday 01 June 2025 22:45:47 +0000 (0:00:00.355) 0:01:07.187 *********** 2025-06-01 22:45:47.426285 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:47.427410 | orchestrator | 2025-06-01 22:45:47.428540 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-01 22:45:47.428642 | orchestrator | Sunday 01 June 2025 22:45:47 +0000 (0:00:00.103) 0:01:07.291 *********** 2025-06-01 22:45:47.539098 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:47.539961 | orchestrator | 2025-06-01 22:45:47.541402 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-01 22:45:47.541509 | orchestrator | Sunday 01 June 2025 22:45:47 +0000 (0:00:00.111) 0:01:07.403 *********** 2025-06-01 22:45:47.685905 | orchestrator | ok: [testbed-node-5] => { 2025-06-01 22:45:47.688690 | orchestrator |  "vgs_report": { 2025-06-01 22:45:47.690141 | orchestrator |  "vg": [] 2025-06-01 22:45:47.691661 | orchestrator |  } 2025-06-01 22:45:47.693072 | orchestrator | } 2025-06-01 22:45:47.694637 | orchestrator | 2025-06-01 22:45:47.695653 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-01 22:45:47.696831 | orchestrator | Sunday 01 June 2025 22:45:47 +0000 (0:00:00.147) 0:01:07.550 *********** 2025-06-01 22:45:47.819917 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:47.820356 | orchestrator | 2025-06-01 22:45:47.821797 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-01 22:45:47.822653 | orchestrator | Sunday 01 June 2025 22:45:47 +0000 (0:00:00.133) 0:01:07.684 *********** 2025-06-01 22:45:47.947677 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:47.948776 | orchestrator | 2025-06-01 22:45:47.949185 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-01 22:45:47.950602 | orchestrator | Sunday 01 June 2025 22:45:47 +0000 (0:00:00.127) 0:01:07.812 *********** 2025-06-01 22:45:48.084454 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:48.086211 | orchestrator | 2025-06-01 22:45:48.087692 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-01 22:45:48.088240 | orchestrator | Sunday 01 June 2025 22:45:48 +0000 (0:00:00.136) 0:01:07.948 *********** 2025-06-01 22:45:48.230823 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:48.231530 | orchestrator | 2025-06-01 22:45:48.233054 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-01 22:45:48.234139 | orchestrator | Sunday 01 June 2025 22:45:48 +0000 (0:00:00.147) 0:01:08.096 *********** 2025-06-01 22:45:48.377470 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:48.378277 | orchestrator | 2025-06-01 22:45:48.378848 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-01 22:45:48.381470 | orchestrator | Sunday 01 June 2025 22:45:48 +0000 (0:00:00.146) 0:01:08.242 *********** 2025-06-01 22:45:48.500716 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:48.502335 | orchestrator | 2025-06-01 22:45:48.503155 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-01 22:45:48.504353 | orchestrator | Sunday 01 June 2025 22:45:48 +0000 (0:00:00.122) 0:01:08.365 *********** 2025-06-01 22:45:48.638848 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:48.640027 | orchestrator | 2025-06-01 22:45:48.640941 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-01 22:45:48.641598 | orchestrator | Sunday 01 June 2025 22:45:48 +0000 (0:00:00.137) 0:01:08.502 *********** 2025-06-01 22:45:48.774126 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:48.774324 | orchestrator | 2025-06-01 22:45:48.777127 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-01 22:45:48.777715 | orchestrator | Sunday 01 June 2025 22:45:48 +0000 (0:00:00.136) 0:01:08.638 *********** 2025-06-01 22:45:49.116707 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:49.116894 | orchestrator | 2025-06-01 22:45:49.117023 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-01 22:45:49.117675 | orchestrator | Sunday 01 June 2025 22:45:49 +0000 (0:00:00.343) 0:01:08.982 *********** 
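[Editor's note] The "Calculate size needed for LVs on ceph_db_devices/ceph_wal_devices" and "Fail if size of ... LVs ... > available" tasks above amount to summing the bytes the requested DB/WAL LVs would consume and comparing that against the free bytes the VG reports. A minimal sketch of that check; the function and variable names are illustrative, not taken from the playbook:

```python
GIB = 1024 ** 3  # bytes per GiB

def lvs_fit_in_vg(vg_free_bytes: int, lv_sizes_bytes: list[int]) -> bool:
    """Return True if all requested LVs fit into the VG's free space.

    Mirrors the 'Fail if size of DB/WAL LVs ... > available' checks:
    the play aborts when the summed LV sizes exceed what the VG offers.
    """
    return sum(lv_sizes_bytes) <= vg_free_bytes

# Two 30 GiB DB LVs fit a VG with 100 GiB free, but not one with 50 GiB free.
print(lvs_fit_in_vg(100 * GIB, [30 * GIB, 30 * GIB]))  # True
print(lvs_fit_in_vg(50 * GIB, [30 * GIB, 30 * GIB]))   # False
```

In this run the checks all skip because no ceph_db_devices/ceph_wal_devices are configured on testbed-node-5.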
2025-06-01 22:45:49.255129 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:49.256618 | orchestrator | 2025-06-01 22:45:49.257336 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-01 22:45:49.258091 | orchestrator | Sunday 01 June 2025 22:45:49 +0000 (0:00:00.138) 0:01:09.120 *********** 2025-06-01 22:45:49.397199 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:49.398329 | orchestrator | 2025-06-01 22:45:49.399195 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-01 22:45:49.400156 | orchestrator | Sunday 01 June 2025 22:45:49 +0000 (0:00:00.140) 0:01:09.261 *********** 2025-06-01 22:45:49.580613 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:49.581588 | orchestrator | 2025-06-01 22:45:49.582230 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-01 22:45:49.583390 | orchestrator | Sunday 01 June 2025 22:45:49 +0000 (0:00:00.182) 0:01:09.444 *********** 2025-06-01 22:45:49.742395 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:49.744036 | orchestrator | 2025-06-01 22:45:49.744129 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-01 22:45:49.745517 | orchestrator | Sunday 01 June 2025 22:45:49 +0000 (0:00:00.163) 0:01:09.608 *********** 2025-06-01 22:45:49.904214 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:49.905163 | orchestrator | 2025-06-01 22:45:49.906494 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-01 22:45:49.907836 | orchestrator | Sunday 01 June 2025 22:45:49 +0000 (0:00:00.160) 0:01:09.768 *********** 2025-06-01 22:45:50.068451 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'})  2025-06-01 
22:45:50.069024 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'})  2025-06-01 22:45:50.069720 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:50.070371 | orchestrator | 2025-06-01 22:45:50.071095 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-01 22:45:50.071905 | orchestrator | Sunday 01 June 2025 22:45:50 +0000 (0:00:00.165) 0:01:09.934 *********** 2025-06-01 22:45:50.213056 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'})  2025-06-01 22:45:50.214146 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'})  2025-06-01 22:45:50.214984 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:50.216597 | orchestrator | 2025-06-01 22:45:50.216820 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-01 22:45:50.217848 | orchestrator | Sunday 01 June 2025 22:45:50 +0000 (0:00:00.143) 0:01:10.078 *********** 2025-06-01 22:45:50.364655 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'})  2025-06-01 22:45:50.364829 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'})  2025-06-01 22:45:50.365270 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:50.366254 | orchestrator | 2025-06-01 22:45:50.366351 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-01 22:45:50.367398 | orchestrator | Sunday 01 June 2025 
22:45:50 +0000 (0:00:00.152) 0:01:10.230 *********** 2025-06-01 22:45:50.512270 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'})  2025-06-01 22:45:50.513406 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'})  2025-06-01 22:45:50.513950 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:50.515368 | orchestrator | 2025-06-01 22:45:50.515990 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-01 22:45:50.516528 | orchestrator | Sunday 01 June 2025 22:45:50 +0000 (0:00:00.144) 0:01:10.375 *********** 2025-06-01 22:45:50.667610 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'})  2025-06-01 22:45:50.667833 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'})  2025-06-01 22:45:50.668873 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:50.669654 | orchestrator | 2025-06-01 22:45:50.670346 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-01 22:45:50.671248 | orchestrator | Sunday 01 June 2025 22:45:50 +0000 (0:00:00.157) 0:01:10.533 *********** 2025-06-01 22:45:50.817817 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'})  2025-06-01 22:45:50.818013 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'})  2025-06-01 22:45:50.819047 | orchestrator | 
skipping: [testbed-node-5] 2025-06-01 22:45:50.819542 | orchestrator | 2025-06-01 22:45:50.820020 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-01 22:45:50.820489 | orchestrator | Sunday 01 June 2025 22:45:50 +0000 (0:00:00.149) 0:01:10.682 *********** 2025-06-01 22:45:51.192287 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'})  2025-06-01 22:45:51.195864 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'})  2025-06-01 22:45:51.196434 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:51.197508 | orchestrator | 2025-06-01 22:45:51.198969 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-01 22:45:51.200907 | orchestrator | Sunday 01 June 2025 22:45:51 +0000 (0:00:00.374) 0:01:11.057 *********** 2025-06-01 22:45:51.351230 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'})  2025-06-01 22:45:51.352288 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'})  2025-06-01 22:45:51.353018 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:45:51.354435 | orchestrator | 2025-06-01 22:45:51.355686 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-01 22:45:51.356477 | orchestrator | Sunday 01 June 2025 22:45:51 +0000 (0:00:00.159) 0:01:11.216 *********** 2025-06-01 22:45:51.856112 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:45:51.856961 | orchestrator | 2025-06-01 22:45:51.857865 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-06-01 22:45:51.858607 | orchestrator | Sunday 01 June 2025 22:45:51 +0000 (0:00:00.503) 0:01:11.720 *********** 2025-06-01 22:45:52.379033 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:45:52.379617 | orchestrator | 2025-06-01 22:45:52.380628 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-01 22:45:52.381500 | orchestrator | Sunday 01 June 2025 22:45:52 +0000 (0:00:00.520) 0:01:12.241 *********** 2025-06-01 22:45:52.520544 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:45:52.521545 | orchestrator | 2025-06-01 22:45:52.523034 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-01 22:45:52.523613 | orchestrator | Sunday 01 June 2025 22:45:52 +0000 (0:00:00.143) 0:01:12.384 *********** 2025-06-01 22:45:52.698831 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'vg_name': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'}) 2025-06-01 22:45:52.699520 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'vg_name': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'}) 2025-06-01 22:45:52.701315 | orchestrator | 2025-06-01 22:45:52.701715 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-01 22:45:52.702782 | orchestrator | Sunday 01 June 2025 22:45:52 +0000 (0:00:00.176) 0:01:12.560 *********** 2025-06-01 22:45:52.848432 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'})  2025-06-01 22:45:52.848558 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'})  2025-06-01 22:45:52.849827 | orchestrator | skipping: 
[testbed-node-5]
2025-06-01 22:45:52.852336 | orchestrator |
2025-06-01 22:45:52.852622 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-01 22:45:52.853506 | orchestrator | Sunday 01 June 2025 22:45:52 +0000 (0:00:00.151) 0:01:12.712 ***********
2025-06-01 22:45:53.015490 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'})
2025-06-01 22:45:53.018874 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'})
2025-06-01 22:45:53.021281 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:45:53.022672 | orchestrator |
2025-06-01 22:45:53.025427 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-01 22:45:53.026466 | orchestrator | Sunday 01 June 2025 22:45:53 +0000 (0:00:00.166) 0:01:12.879 ***********
2025-06-01 22:45:53.161587 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'})
2025-06-01 22:45:53.161888 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'})
2025-06-01 22:45:53.163589 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:45:53.165712 | orchestrator |
2025-06-01 22:45:53.166319 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-01 22:45:53.167273 | orchestrator | Sunday 01 June 2025 22:45:53 +0000 (0:00:00.147) 0:01:13.026 ***********
2025-06-01 22:45:53.312038 | orchestrator | ok: [testbed-node-5] => {
2025-06-01 22:45:53.313663 | orchestrator |     "lvm_report": {
2025-06-01 22:45:53.314098 | orchestrator |         "lv": [
2025-06-01 22:45:53.315814 | orchestrator |             {
2025-06-01 22:45:53.316463 | orchestrator |                 "lv_name": "osd-block-83360607-213f-5c54-ae9b-aa580894d048",
2025-06-01 22:45:53.317793 | orchestrator |                 "vg_name": "ceph-83360607-213f-5c54-ae9b-aa580894d048"
2025-06-01 22:45:53.319275 | orchestrator |             },
2025-06-01 22:45:53.320183 | orchestrator |             {
2025-06-01 22:45:53.321035 | orchestrator |                 "lv_name": "osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e",
2025-06-01 22:45:53.321958 | orchestrator |                 "vg_name": "ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e"
2025-06-01 22:45:53.322946 | orchestrator |             }
2025-06-01 22:45:53.323868 | orchestrator |         ],
2025-06-01 22:45:53.324546 | orchestrator |         "pv": [
2025-06-01 22:45:53.324802 | orchestrator |             {
2025-06-01 22:45:53.325825 | orchestrator |                 "pv_name": "/dev/sdb",
2025-06-01 22:45:53.326992 | orchestrator |                 "vg_name": "ceph-83360607-213f-5c54-ae9b-aa580894d048"
2025-06-01 22:45:53.327633 | orchestrator |             },
2025-06-01 22:45:53.328256 | orchestrator |             {
2025-06-01 22:45:53.329345 | orchestrator |                 "pv_name": "/dev/sdc",
2025-06-01 22:45:53.330889 | orchestrator |                 "vg_name": "ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e"
2025-06-01 22:45:53.331877 | orchestrator |             }
2025-06-01 22:45:53.332612 | orchestrator |         ]
2025-06-01 22:45:53.333315 | orchestrator |     }
2025-06-01 22:45:53.334264 | orchestrator | }
2025-06-01 22:45:53.335255 | orchestrator |
2025-06-01 22:45:53.336544 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:45:53.336612 | orchestrator | 2025-06-01 22:45:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 22:45:53.336628 | orchestrator | 2025-06-01 22:45:53 | INFO  | Please wait and do not abort execution.
2025-06-01 22:45:53.337325 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-01 22:45:53.337589 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-01 22:45:53.338518 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-01 22:45:53.339390 | orchestrator |
2025-06-01 22:45:53.340034 | orchestrator |
2025-06-01 22:45:53.340972 | orchestrator |
2025-06-01 22:45:53.341404 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:45:53.342252 | orchestrator | Sunday 01 June 2025 22:45:53 +0000 (0:00:00.149) 0:01:13.176 ***********
2025-06-01 22:45:53.342958 | orchestrator | ===============================================================================
2025-06-01 22:45:53.343411 | orchestrator | Create block VGs -------------------------------------------------------- 5.74s
2025-06-01 22:45:53.344628 | orchestrator | Create block LVs -------------------------------------------------------- 4.07s
2025-06-01 22:45:53.345431 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.88s
2025-06-01 22:45:53.345973 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.57s
2025-06-01 22:45:53.346732 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.57s
2025-06-01 22:45:53.347169 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.56s
2025-06-01 22:45:53.347996 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.54s
2025-06-01 22:45:53.348356 | orchestrator | Add known partitions to the list of available block devices ------------- 1.49s
2025-06-01 22:45:53.349965 | orchestrator | Add known links to the list of available block devices ------------------ 1.23s
2025-06-01 22:45:53.351045 | orchestrator | Add known partitions to the list of available block devices ------------- 1.10s
2025-06-01 22:45:53.352850 | orchestrator | Print LVM report data --------------------------------------------------- 0.94s
2025-06-01 22:45:53.354081 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s
2025-06-01 22:45:53.355532 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.77s
2025-06-01 22:45:53.356653 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s
2025-06-01 22:45:53.357330 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.71s
2025-06-01 22:45:53.358153 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.71s
2025-06-01 22:45:53.359021 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s
2025-06-01 22:45:53.359832 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s
2025-06-01 22:45:53.361387 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.69s
2025-06-01 22:45:53.362153 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.68s
2025-06-01 22:45:55.681413 | orchestrator | Registering Redlock._acquired_script
2025-06-01 22:45:55.681521 | orchestrator | Registering Redlock._extend_script
2025-06-01 22:45:55.681538 | orchestrator | Registering Redlock._release_script
2025-06-01 22:45:55.762007 | orchestrator | 2025-06-01 22:45:55 | INFO  | Task 824b905d-f3a2-4d1b-a437-c4d40df3fd07 (facts) was prepared for execution.
2025-06-01 22:45:55.762144 | orchestrator | 2025-06-01 22:45:55 | INFO  | It takes a moment until task 824b905d-f3a2-4d1b-a437-c4d40df3fd07 (facts) has been started and output is visible here.
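For reference, the LVM report printed by the "Print LVM report data" task above maps each Ceph LV and PV to its volume group. A minimal sketch of inverting such a report into a per-VG view (the `report` dict is sample data copied from the log output above, not queried live from LVM):

```python
# Sample data mirroring the "Print LVM report data" output in this log.
report = {
    "lv": [
        {"lv_name": "osd-block-83360607-213f-5c54-ae9b-aa580894d048",
         "vg_name": "ceph-83360607-213f-5c54-ae9b-aa580894d048"},
        {"lv_name": "osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e",
         "vg_name": "ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e"},
    ],
    "pv": [
        {"pv_name": "/dev/sdb", "vg_name": "ceph-83360607-213f-5c54-ae9b-aa580894d048"},
        {"pv_name": "/dev/sdc", "vg_name": "ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e"},
    ],
}

def by_vg(report):
    """Group the flat lv/pv report lists by volume group name."""
    vgs = {}
    for lv in report.get("lv", []):
        vgs.setdefault(lv["vg_name"], {"lvs": [], "pvs": []})["lvs"].append(lv["lv_name"])
    for pv in report.get("pv", []):
        vgs.setdefault(pv["vg_name"], {"lvs": [], "pvs": []})["pvs"].append(pv["pv_name"])
    return vgs
```

This makes it easy to see at a glance that each OSD block LV sits in its own VG backed by exactly one PV (/dev/sdb and /dev/sdc here).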
2025-06-01 22:45:59.857787 | orchestrator |
2025-06-01 22:45:59.857902 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-01 22:45:59.857919 | orchestrator |
2025-06-01 22:45:59.859278 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-01 22:45:59.860160 | orchestrator | Sunday 01 June 2025 22:45:59 +0000 (0:00:00.286) 0:00:00.286 ***********
2025-06-01 22:46:00.986849 | orchestrator | ok: [testbed-manager]
2025-06-01 22:46:00.986940 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:46:00.988563 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:46:00.989584 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:46:00.994173 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:46:00.994189 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:46:00.994195 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:46:00.994554 | orchestrator |
2025-06-01 22:46:00.995961 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-01 22:46:00.997108 | orchestrator | Sunday 01 June 2025 22:46:00 +0000 (0:00:01.131) 0:00:01.417 ***********
2025-06-01 22:46:01.149036 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:46:01.229304 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:46:01.309868 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:46:01.399733 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:46:01.495939 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:46:02.230584 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:46:02.233546 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:46:02.233597 | orchestrator |
2025-06-01 22:46:02.233613 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-01 22:46:02.235015 | orchestrator |
2025-06-01 22:46:02.235657 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-01 22:46:02.236831 | orchestrator | Sunday 01 June 2025 22:46:02 +0000 (0:00:01.245) 0:00:02.662 ***********
2025-06-01 22:46:07.170532 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:46:07.170988 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:46:07.172507 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:46:07.172926 | orchestrator | ok: [testbed-manager]
2025-06-01 22:46:07.173716 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:46:07.174258 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:46:07.174980 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:46:07.177975 | orchestrator |
2025-06-01 22:46:07.177999 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-01 22:46:07.178098 | orchestrator |
2025-06-01 22:46:07.178115 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-01 22:46:07.178181 | orchestrator | Sunday 01 June 2025 22:46:07 +0000 (0:00:04.943) 0:00:07.606 ***********
2025-06-01 22:46:07.328718 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:46:07.406244 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:46:07.489295 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:46:07.559534 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:46:07.637766 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:46:07.682160 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:46:07.683271 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:46:07.685340 | orchestrator |
2025-06-01 22:46:07.687525 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:46:07.687571 | orchestrator | 2025-06-01 22:46:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 22:46:07.687586 | orchestrator | 2025-06-01 22:46:07 | INFO  | Please wait and do not abort execution.
2025-06-01 22:46:07.688501 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:46:07.691403 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:46:07.692376 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:46:07.693344 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:46:07.694113 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:46:07.694848 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:46:07.695573 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 22:46:07.695754 | orchestrator |
2025-06-01 22:46:07.696729 | orchestrator |
2025-06-01 22:46:07.697163 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:46:07.698176 | orchestrator | Sunday 01 June 2025 22:46:07 +0000 (0:00:00.511) 0:00:08.118 ***********
2025-06-01 22:46:07.698636 | orchestrator | ===============================================================================
2025-06-01 22:46:07.699176 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.94s
2025-06-01 22:46:07.699578 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.25s
2025-06-01 22:46:07.700667 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s
2025-06-01 22:46:07.701817 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s
2025-06-01
22:46:08.344005 | orchestrator |
2025-06-01 22:46:08.345844 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun Jun 1 22:46:08 UTC 2025
2025-06-01 22:46:08.345874 | orchestrator |
2025-06-01 22:46:10.055835 | orchestrator | 2025-06-01 22:46:10 | INFO  | Collection nutshell is prepared for execution
2025-06-01 22:46:10.055929 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [0] - dotfiles
2025-06-01 22:46:10.060981 | orchestrator | Registering Redlock._acquired_script
2025-06-01 22:46:10.061005 | orchestrator | Registering Redlock._extend_script
2025-06-01 22:46:10.061016 | orchestrator | Registering Redlock._release_script
2025-06-01 22:46:10.066252 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [0] - homer
2025-06-01 22:46:10.066274 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [0] - netdata
2025-06-01 22:46:10.066285 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [0] - openstackclient
2025-06-01 22:46:10.066296 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [0] - phpmyadmin
2025-06-01 22:46:10.066306 | orchestrator | 2025-06-01 22:46:10 | INFO  | A [0] - common
2025-06-01 22:46:10.068094 | orchestrator | 2025-06-01 22:46:10 | INFO  | A [1] -- loadbalancer
2025-06-01 22:46:10.068113 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [2] --- opensearch
2025-06-01 22:46:10.068288 | orchestrator | 2025-06-01 22:46:10 | INFO  | A [2] --- mariadb-ng
2025-06-01 22:46:10.068375 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [3] ---- horizon
2025-06-01 22:46:10.068455 | orchestrator | 2025-06-01 22:46:10 | INFO  | A [3] ---- keystone
2025-06-01 22:46:10.068529 | orchestrator | 2025-06-01 22:46:10 | INFO  | A [4] ----- neutron
2025-06-01 22:46:10.068608 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [5] ------ wait-for-nova
2025-06-01 22:46:10.068684 | orchestrator | 2025-06-01 22:46:10 | INFO  | A [5] ------ octavia
2025-06-01 22:46:10.069663 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [4] ----- barbican
2025-06-01 22:46:10.069702 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [4] ----- designate
2025-06-01 22:46:10.069722 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [4] ----- ironic
2025-06-01 22:46:10.069764 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [4] ----- placement
2025-06-01 22:46:10.069782 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [4] ----- magnum
2025-06-01 22:46:10.070158 | orchestrator | 2025-06-01 22:46:10 | INFO  | A [1] -- openvswitch
2025-06-01 22:46:10.070261 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [2] --- ovn
2025-06-01 22:46:10.070286 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [1] -- memcached
2025-06-01 22:46:10.070354 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [1] -- redis
2025-06-01 22:46:10.070367 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [1] -- rabbitmq-ng
2025-06-01 22:46:10.070603 | orchestrator | 2025-06-01 22:46:10 | INFO  | A [0] - kubernetes
2025-06-01 22:46:10.072351 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [1] -- kubeconfig
2025-06-01 22:46:10.072381 | orchestrator | 2025-06-01 22:46:10 | INFO  | A [1] -- copy-kubeconfig
2025-06-01 22:46:10.072451 | orchestrator | 2025-06-01 22:46:10 | INFO  | A [0] - ceph
2025-06-01 22:46:10.074351 | orchestrator | 2025-06-01 22:46:10 | INFO  | A [1] -- ceph-pools
2025-06-01 22:46:10.074381 | orchestrator | 2025-06-01 22:46:10 | INFO  | A [2] --- copy-ceph-keys
2025-06-01 22:46:10.074391 | orchestrator | 2025-06-01 22:46:10 | INFO  | A [3] ---- cephclient
2025-06-01 22:46:10.074401 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-06-01 22:46:10.074420 | orchestrator | 2025-06-01 22:46:10 | INFO  | A [4] ----- wait-for-keystone
2025-06-01 22:46:10.074506 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [5] ------ kolla-ceph-rgw
2025-06-01 22:46:10.074521 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [5] ------ glance
2025-06-01 22:46:10.074595 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [5] ------ cinder
2025-06-01 22:46:10.074659 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [5] ------ nova
2025-06-01 22:46:10.075041 | orchestrator | 2025-06-01 22:46:10 | INFO  | A [4] ----- prometheus
2025-06-01 22:46:10.075060 | orchestrator | 2025-06-01 22:46:10 | INFO  | D [5] ------ grafana
2025-06-01 22:46:10.253111 | orchestrator | 2025-06-01 22:46:10 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-06-01 22:46:10.253289 | orchestrator | 2025-06-01 22:46:10 | INFO  | Tasks are running in the background
2025-06-01 22:46:12.872410 | orchestrator | 2025-06-01 22:46:12 | INFO  | No task IDs specified, wait for all currently running tasks
2025-06-01 22:46:15.016611 | orchestrator | 2025-06-01 22:46:15 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED
2025-06-01 22:46:15.016896 | orchestrator | 2025-06-01 22:46:15 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED
2025-06-01 22:46:15.019186 | orchestrator | 2025-06-01 22:46:15 | INFO  | Task 5eaadcd8-5ba1-4884-8726-ec3951c86342 is in state STARTED
2025-06-01 22:46:15.020724 | orchestrator | 2025-06-01 22:46:15 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:46:15.021224 | orchestrator | 2025-06-01 22:46:15 | INFO  | Task 41275acd-3375-48ee-9835-0e3dda971a30 is in state STARTED
2025-06-01 22:46:15.026122 | orchestrator | 2025-06-01 22:46:15 | INFO  | Task 28f73dd6-ea38-413a-92a4-6551009f1943 is in state STARTED
2025-06-01 22:46:15.026169 | orchestrator | 2025-06-01 22:46:15 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:46:15.026182 | orchestrator | 2025-06-01 22:46:15 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:46:18.077725 | orchestrator | 2025-06-01 22:46:18 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED
2025-06-01 22:46:18.080338 | orchestrator | 2025-06-01 22:46:18 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED
2025-06-01 22:46:18.080789 | orchestrator | 2025-06-01 22:46:18 | INFO  | Task 5eaadcd8-5ba1-4884-8726-ec3951c86342 is in state STARTED
2025-06-01 22:46:18.081261 | orchestrator | 2025-06-01 22:46:18 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:46:18.084226 | orchestrator | 2025-06-01 22:46:18 | INFO  | Task 41275acd-3375-48ee-9835-0e3dda971a30 is in state STARTED
2025-06-01 22:46:18.084975 | orchestrator | 2025-06-01 22:46:18 | INFO  | Task 28f73dd6-ea38-413a-92a4-6551009f1943 is in state STARTED
2025-06-01 22:46:18.090546 | orchestrator | 2025-06-01 22:46:18 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:46:18.090585 | orchestrator | 2025-06-01 22:46:18 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:46:21.124503 | orchestrator | 2025-06-01 22:46:21 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED
2025-06-01 22:46:21.124643 | orchestrator | 2025-06-01 22:46:21 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED
2025-06-01 22:46:21.124791 | orchestrator | 2025-06-01 22:46:21 | INFO  | Task 5eaadcd8-5ba1-4884-8726-ec3951c86342 is in state STARTED
2025-06-01 22:46:21.125173 | orchestrator | 2025-06-01 22:46:21 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:46:21.125588 | orchestrator | 2025-06-01 22:46:21 | INFO  | Task 41275acd-3375-48ee-9835-0e3dda971a30 is in state STARTED
2025-06-01 22:46:21.126262 | orchestrator | 2025-06-01 22:46:21 | INFO  | Task 28f73dd6-ea38-413a-92a4-6551009f1943 is in state STARTED
2025-06-01 22:46:21.128673 | orchestrator | 2025-06-01 22:46:21 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:46:21.128704 | orchestrator | 2025-06-01 22:46:21 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:46:24.171697 | orchestrator | 2025-06-01 22:46:24 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED
2025-06-01 22:46:24.171844 | orchestrator | 2025-06-01 22:46:24 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED
2025-06-01 22:46:24.174301 | orchestrator | 2025-06-01 22:46:24 | INFO  | Task 5eaadcd8-5ba1-4884-8726-ec3951c86342 is in state STARTED
2025-06-01 22:46:24.175609 | orchestrator | 2025-06-01 22:46:24 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:46:24.176138 | orchestrator | 2025-06-01 22:46:24 | INFO  | Task 41275acd-3375-48ee-9835-0e3dda971a30 is in state STARTED
2025-06-01 22:46:24.185775 | orchestrator | 2025-06-01 22:46:24 | INFO  | Task 28f73dd6-ea38-413a-92a4-6551009f1943 is in state STARTED
2025-06-01 22:46:24.186092 | orchestrator | 2025-06-01 22:46:24 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:46:24.186120 | orchestrator | 2025-06-01 22:46:24 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:46:27.256943 | orchestrator | 2025-06-01 22:46:27 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED
2025-06-01 22:46:27.257332 | orchestrator | 2025-06-01 22:46:27 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED
2025-06-01 22:46:27.258089 | orchestrator | 2025-06-01 22:46:27 | INFO  | Task 5eaadcd8-5ba1-4884-8726-ec3951c86342 is in state STARTED
2025-06-01 22:46:27.258704 | orchestrator | 2025-06-01 22:46:27 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:46:27.261487 | orchestrator | 2025-06-01 22:46:27 | INFO  | Task 41275acd-3375-48ee-9835-0e3dda971a30 is in state STARTED
2025-06-01 22:46:27.262180 | orchestrator | 2025-06-01 22:46:27 | INFO  | Task 28f73dd6-ea38-413a-92a4-6551009f1943 is in state STARTED
2025-06-01 22:46:27.262883 | orchestrator | 2025-06-01 22:46:27 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:46:27.263054 | orchestrator | 2025-06-01 22:46:27 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:46:30.336804 | orchestrator | 2025-06-01 22:46:30 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED
2025-06-01 22:46:30.340985 | orchestrator | 2025-06-01 22:46:30 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED
2025-06-01 22:46:30.341377 | orchestrator | 2025-06-01 22:46:30 | INFO  | Task 5eaadcd8-5ba1-4884-8726-ec3951c86342 is in state STARTED
2025-06-01 22:46:30.342103 | orchestrator | 2025-06-01 22:46:30 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:46:30.345089 | orchestrator | 2025-06-01 22:46:30 | INFO  | Task 41275acd-3375-48ee-9835-0e3dda971a30 is in state STARTED
2025-06-01 22:46:30.345334 | orchestrator | 2025-06-01 22:46:30 | INFO  | Task 28f73dd6-ea38-413a-92a4-6551009f1943 is in state STARTED
2025-06-01 22:46:30.350068 | orchestrator | 2025-06-01 22:46:30 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:46:30.350100 | orchestrator | 2025-06-01 22:46:30 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:46:33.412369 | orchestrator | 2025-06-01 22:46:33 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED
2025-06-01 22:46:33.413263 | orchestrator | 2025-06-01 22:46:33 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED
2025-06-01 22:46:33.413297 | orchestrator | 2025-06-01 22:46:33 | INFO  | Task 5eaadcd8-5ba1-4884-8726-ec3951c86342 is in state STARTED
2025-06-01 22:46:33.413310 | orchestrator | 2025-06-01 22:46:33 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:46:33.413322 | orchestrator | 2025-06-01 22:46:33 | INFO  | Task 41275acd-3375-48ee-9835-0e3dda971a30 is in state STARTED
2025-06-01 22:46:33.413333 | orchestrator | 2025-06-01 22:46:33 | INFO  | Task 28f73dd6-ea38-413a-92a4-6551009f1943 is in state STARTED
2025-06-01 22:46:33.413344 | orchestrator | 2025-06-01 22:46:33 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:46:33.413356 | orchestrator | 2025-06-01 22:46:33 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:46:36.459383 | orchestrator | 2025-06-01 22:46:36 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED
2025-06-01 22:46:36.462541 | orchestrator | 2025-06-01 22:46:36 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED
2025-06-01 22:46:36.463066 | orchestrator | 2025-06-01 22:46:36 | INFO  | Task 5eaadcd8-5ba1-4884-8726-ec3951c86342 is in state STARTED
2025-06-01 22:46:36.466536 | orchestrator | 2025-06-01 22:46:36 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:46:36.470479 | orchestrator | 2025-06-01 22:46:36 | INFO  | Task 41275acd-3375-48ee-9835-0e3dda971a30 is in state STARTED
2025-06-01 22:46:36.474087 | orchestrator | 2025-06-01 22:46:36 | INFO  | Task 28f73dd6-ea38-413a-92a4-6551009f1943 is in state STARTED
2025-06-01 22:46:36.477027 | orchestrator | 2025-06-01 22:46:36 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:46:36.477069 | orchestrator | 2025-06-01 22:46:36 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:46:39.542882 | orchestrator | 2025-06-01 22:46:39 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED
2025-06-01 22:46:39.551776 | orchestrator | 2025-06-01 22:46:39 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED
2025-06-01 22:46:39.553012 | orchestrator | 2025-06-01 22:46:39 | INFO  | Task 5eaadcd8-5ba1-4884-8726-ec3951c86342 is in state STARTED
2025-06-01 22:46:39.554214 | orchestrator | 2025-06-01 22:46:39 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:46:39.554251 | orchestrator | 2025-06-01 22:46:39 | INFO  | Task 41275acd-3375-48ee-9835-0e3dda971a30 is in state SUCCESS
2025-06-01 22:46:39.554427 | orchestrator |
2025-06-01 22:46:39.554447 | orchestrator | PLAY [Apply role geerlingguy.dotfiles]
*****************************************
2025-06-01 22:46:39.554460 | orchestrator |
2025-06-01 22:46:39.554471 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-06-01 22:46:39.554482 | orchestrator | Sunday 01 June 2025 22:46:21 +0000 (0:00:00.668) 0:00:00.668 ***********
2025-06-01 22:46:39.554493 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:46:39.554506 | orchestrator | changed: [testbed-manager]
2025-06-01 22:46:39.554517 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:46:39.554528 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:46:39.554539 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:46:39.554549 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:46:39.554560 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:46:39.554571 | orchestrator |
2025-06-01 22:46:39.554582 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-06-01 22:46:39.554593 | orchestrator | Sunday 01 June 2025 22:46:26 +0000 (0:00:04.616) 0:00:05.285 ***********
2025-06-01 22:46:39.554604 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-06-01 22:46:39.554616 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-06-01 22:46:39.554627 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-06-01 22:46:39.554637 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-06-01 22:46:39.554648 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-06-01 22:46:39.554660 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-06-01 22:46:39.554671 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-06-01 22:46:39.554696 | orchestrator |
2025-06-01 22:46:39.554708 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-06-01 22:46:39.554743 | orchestrator | Sunday 01 June 2025 22:46:28 +0000 (0:00:02.261) 0:00:07.547 ***********
2025-06-01 22:46:39.554767 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-01 22:46:27.400903', 'end': '2025-06-01 22:46:27.410894', 'delta': '0:00:00.009991', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-01 22:46:39.554803 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-01 22:46:27.404469', 'end': '2025-06-01 22:46:27.414270', 'delta': '0:00:00.009801', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-01 22:46:39.554816 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-01 22:46:27.400505', 'end': '2025-06-01 22:46:27.409046', 'delta': '0:00:00.008541', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-01 22:46:39.554849 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-01 22:46:27.343345', 'end': '2025-06-01 22:46:27.349569', 'delta': '0:00:00.006224', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-01 22:46:39.554861 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-01 22:46:27.907944', 'end': '2025-06-01 22:46:27.917361', 'delta': '0:00:00.009417', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-01 22:46:39.554878 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-01 22:46:28.240116', 'end': '2025-06-01 22:46:28.249774', 'delta': '0:00:00.009658', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-01 22:46:39.554904 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-01 22:46:28.500872', 'end': '2025-06-01 22:46:28.511756', 'delta': '0:00:00.010884', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-01 22:46:39.554916 | orchestrator |
2025-06-01 22:46:39.554940 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-06-01 22:46:39.554990 | orchestrator | Sunday 01 June 2025 22:46:30 +0000 (0:00:01.973) 0:00:09.523 ***********
2025-06-01 22:46:39.555001 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-06-01 22:46:39.555012 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-06-01 22:46:39.555023 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-06-01 22:46:39.555034 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-06-01 22:46:39.555046 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-06-01 22:46:39.555071 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-06-01 22:46:39.555083 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-06-01 22:46:39.555096 | orchestrator |
2025-06-01 22:46:39.555109 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.]
******************
2025-06-01 22:46:39.555121 | orchestrator | Sunday 01 June 2025 22:46:33 +0000 (0:00:02.347) 0:00:11.870 ***********
2025-06-01 22:46:39.555134 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-06-01 22:46:39.555147 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-06-01 22:46:39.555160 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-06-01 22:46:39.555173 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-06-01 22:46:39.555185 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-06-01 22:46:39.555198 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-06-01 22:46:39.555222 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-06-01 22:46:39.555235 | orchestrator |
2025-06-01 22:46:39.555247 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:46:39.555269 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:46:39.555284 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:46:39.555297 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:46:39.555310 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:46:39.555330 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:46:39.555361 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:46:39.555399 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:46:39.555434 | orchestrator |
2025-06-01 22:46:39.555454 | orchestrator |
2025-06-01 22:46:39.555472 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:46:39.555491 | orchestrator | Sunday 01 June 2025 22:46:36 +0000 (0:00:03.225) 0:00:15.095 ***********
2025-06-01 22:46:39.555516 | orchestrator | ===============================================================================
2025-06-01 22:46:39.555527 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.62s
2025-06-01 22:46:39.555538 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.23s
2025-06-01 22:46:39.555548 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.35s
2025-06-01 22:46:39.555559 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.26s
2025-06-01 22:46:39.555570 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.98s
2025-06-01 22:46:39.555657 | orchestrator | 2025-06-01 22:46:39 | INFO  | Task 28f73dd6-ea38-413a-92a4-6551009f1943 is in state STARTED
2025-06-01 22:46:39.556138 | orchestrator | 2025-06-01 22:46:39 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:46:39.556552 | orchestrator | 2025-06-01 22:46:39 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED
2025-06-01 22:46:39.557262 | orchestrator | 2025-06-01 22:46:39 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:46:42.619599 | orchestrator | 2025-06-01 22:46:42 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED
2025-06-01 22:46:42.623779 | orchestrator | 2025-06-01 22:46:42 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED
2025-06-01 22:46:42.629381 | orchestrator | 2025-06-01 22:46:42 | INFO  | Task 5eaadcd8-5ba1-4884-8726-ec3951c86342 is in state STARTED
2025-06-01 22:46:42.642148 | orchestrator | 2025-06-01 22:46:42 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is
in state STARTED 2025-06-01 22:46:42.646558 | orchestrator | 2025-06-01 22:46:42 | INFO  | Task 28f73dd6-ea38-413a-92a4-6551009f1943 is in state STARTED 2025-06-01 22:46:42.658163 | orchestrator | 2025-06-01 22:46:42 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:46:42.665124 | orchestrator | 2025-06-01 22:46:42 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED 2025-06-01 22:46:42.665169 | orchestrator | 2025-06-01 22:46:42 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:46:45.724077 | orchestrator | 2025-06-01 22:46:45 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED 2025-06-01 22:46:45.725274 | orchestrator | 2025-06-01 22:46:45 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:46:45.729917 | orchestrator | 2025-06-01 22:46:45 | INFO  | Task 5eaadcd8-5ba1-4884-8726-ec3951c86342 is in state STARTED 2025-06-01 22:46:45.729966 | orchestrator | 2025-06-01 22:46:45 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:46:45.729979 | orchestrator | 2025-06-01 22:46:45 | INFO  | Task 28f73dd6-ea38-413a-92a4-6551009f1943 is in state STARTED 2025-06-01 22:46:45.730348 | orchestrator | 2025-06-01 22:46:45 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:46:45.730397 | orchestrator | 2025-06-01 22:46:45 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED 2025-06-01 22:46:45.730410 | orchestrator | 2025-06-01 22:46:45 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:46:48.811952 | orchestrator | 2025-06-01 22:46:48 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED 2025-06-01 22:46:48.820039 | orchestrator | 2025-06-01 22:46:48 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:46:48.825396 | orchestrator | 2025-06-01 22:46:48 | INFO  | Task 5eaadcd8-5ba1-4884-8726-ec3951c86342 is in 
state STARTED 2025-06-01 22:46:48.829624 | orchestrator | 2025-06-01 22:46:48 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:46:48.831960 | orchestrator | 2025-06-01 22:46:48 | INFO  | Task 28f73dd6-ea38-413a-92a4-6551009f1943 is in state STARTED 2025-06-01 22:46:48.834985 | orchestrator | 2025-06-01 22:46:48 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:46:48.835867 | orchestrator | 2025-06-01 22:46:48 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED 2025-06-01 22:46:48.835892 | orchestrator | 2025-06-01 22:46:48 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:46:51.887503 | orchestrator | 2025-06-01 22:46:51 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED 2025-06-01 22:46:51.887647 | orchestrator | 2025-06-01 22:46:51 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:46:51.887789 | orchestrator | 2025-06-01 22:46:51 | INFO  | Task 5eaadcd8-5ba1-4884-8726-ec3951c86342 is in state STARTED 2025-06-01 22:46:51.888704 | orchestrator | 2025-06-01 22:46:51 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:46:51.890102 | orchestrator | 2025-06-01 22:46:51 | INFO  | Task 28f73dd6-ea38-413a-92a4-6551009f1943 is in state STARTED 2025-06-01 22:46:51.891148 | orchestrator | 2025-06-01 22:46:51 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:46:51.891550 | orchestrator | 2025-06-01 22:46:51 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED 2025-06-01 22:46:51.892440 | orchestrator | 2025-06-01 22:46:51 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:46:54.999976 | orchestrator | 2025-06-01 22:46:54 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED 2025-06-01 22:46:55.006368 | orchestrator | 2025-06-01 22:46:55 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state 
STARTED 2025-06-01 22:46:55.006781 | orchestrator | 2025-06-01 22:46:55 | INFO  | Task 5eaadcd8-5ba1-4884-8726-ec3951c86342 is in state STARTED 2025-06-01 22:46:55.007785 | orchestrator | 2025-06-01 22:46:55 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:46:55.010930 | orchestrator | 2025-06-01 22:46:55 | INFO  | Task 28f73dd6-ea38-413a-92a4-6551009f1943 is in state STARTED 2025-06-01 22:46:55.014514 | orchestrator | 2025-06-01 22:46:55 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:46:55.018230 | orchestrator | 2025-06-01 22:46:55 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED 2025-06-01 22:46:55.022355 | orchestrator | 2025-06-01 22:46:55 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:46:58.058399 | orchestrator | 2025-06-01 22:46:58 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED 2025-06-01 22:46:58.062112 | orchestrator | 2025-06-01 22:46:58 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:46:58.062298 | orchestrator | 2025-06-01 22:46:58 | INFO  | Task 5eaadcd8-5ba1-4884-8726-ec3951c86342 is in state SUCCESS 2025-06-01 22:46:58.067024 | orchestrator | 2025-06-01 22:46:58 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:46:58.068990 | orchestrator | 2025-06-01 22:46:58 | INFO  | Task 28f73dd6-ea38-413a-92a4-6551009f1943 is in state STARTED 2025-06-01 22:46:58.071946 | orchestrator | 2025-06-01 22:46:58 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:46:58.075337 | orchestrator | 2025-06-01 22:46:58 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED 2025-06-01 22:46:58.075359 | orchestrator | 2025-06-01 22:46:58 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:47:01.124587 | orchestrator | 2025-06-01 22:47:01 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED 
2025-06-01 22:47:01.124809 | orchestrator | 2025-06-01 22:47:01 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED
2025-06-01 22:47:01.126864 | orchestrator | 2025-06-01 22:47:01 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:47:01.126896 | orchestrator | 2025-06-01 22:47:01 | INFO  | Task 28f73dd6-ea38-413a-92a4-6551009f1943 is in state STARTED
2025-06-01 22:47:01.137103 | orchestrator | 2025-06-01 22:47:01 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:47:01.137149 | orchestrator | 2025-06-01 22:47:01 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED
2025-06-01 22:47:01.137163 | orchestrator | 2025-06-01 22:47:01 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:47:04.182882 | orchestrator | 2025-06-01 22:47:04 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED
2025-06-01 22:47:04.182986 | orchestrator | 2025-06-01 22:47:04 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED
2025-06-01 22:47:04.183001 | orchestrator | 2025-06-01 22:47:04 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:47:04.188562 | orchestrator | 2025-06-01 22:47:04 | INFO  | Task 28f73dd6-ea38-413a-92a4-6551009f1943 is in state STARTED
2025-06-01 22:47:04.190374 | orchestrator | 2025-06-01 22:47:04 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:47:04.195379 | orchestrator | 2025-06-01 22:47:04 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED
2025-06-01 22:47:04.195454 | orchestrator | 2025-06-01 22:47:04 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:47:07.281516 | orchestrator | 2025-06-01 22:47:07 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED
2025-06-01 22:47:07.284306 | orchestrator | 2025-06-01 22:47:07 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED
2025-06-01 22:47:07.284345 | orchestrator | 2025-06-01 22:47:07 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:47:07.284358 | orchestrator | 2025-06-01 22:47:07 | INFO  | Task 28f73dd6-ea38-413a-92a4-6551009f1943 is in state STARTED
2025-06-01 22:47:07.285484 | orchestrator | 2025-06-01 22:47:07 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:47:07.287105 | orchestrator | 2025-06-01 22:47:07 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED
2025-06-01 22:47:07.287139 | orchestrator | 2025-06-01 22:47:07 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:47:10.349608 | orchestrator | 2025-06-01 22:47:10 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED
2025-06-01 22:47:10.352007 | orchestrator | 2025-06-01 22:47:10 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED
2025-06-01 22:47:10.352082 | orchestrator | 2025-06-01 22:47:10 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:47:10.352095 | orchestrator | 2025-06-01 22:47:10 | INFO  | Task 28f73dd6-ea38-413a-92a4-6551009f1943 is in state STARTED
2025-06-01 22:47:10.352107 | orchestrator | 2025-06-01 22:47:10 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:47:10.352118 | orchestrator | 2025-06-01 22:47:10 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED
2025-06-01 22:47:10.352130 | orchestrator | 2025-06-01 22:47:10 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:47:13.390691 | orchestrator | 2025-06-01 22:47:13 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED
2025-06-01 22:47:13.390867 | orchestrator | 2025-06-01 22:47:13 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED
2025-06-01 22:47:13.390882 | orchestrator | 2025-06-01 22:47:13 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:47:13.390895 | orchestrator | 2025-06-01 22:47:13 | INFO  | Task 28f73dd6-ea38-413a-92a4-6551009f1943 is in state SUCCESS
2025-06-01 22:47:13.392724 | orchestrator | 2025-06-01 22:47:13 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:47:13.392987 | orchestrator | 2025-06-01 22:47:13 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED
2025-06-01 22:47:13.393009 | orchestrator | 2025-06-01 22:47:13 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:47:16.468352 | orchestrator | 2025-06-01 22:47:16 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED
2025-06-01 22:47:16.468484 | orchestrator | 2025-06-01 22:47:16 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED
2025-06-01 22:47:16.470130 | orchestrator | 2025-06-01 22:47:16 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:47:16.474127 | orchestrator | 2025-06-01 22:47:16 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:47:16.474156 | orchestrator | 2025-06-01 22:47:16 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED
2025-06-01 22:47:16.474169 | orchestrator | 2025-06-01 22:47:16 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:47:19.512570 | orchestrator | 2025-06-01 22:47:19 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED
2025-06-01 22:47:19.512843 | orchestrator | 2025-06-01 22:47:19 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED
2025-06-01 22:47:19.513974 | orchestrator | 2025-06-01 22:47:19 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:47:19.514288 | orchestrator | 2025-06-01 22:47:19 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:47:19.515478 | orchestrator | 2025-06-01 22:47:19 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED
2025-06-01 22:47:19.515502 | orchestrator | 2025-06-01 22:47:19 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:47:22.548155 | orchestrator | 2025-06-01 22:47:22 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state STARTED
2025-06-01 22:47:22.550147 | orchestrator | 2025-06-01 22:47:22 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED
2025-06-01 22:47:22.551195 | orchestrator | 2025-06-01 22:47:22 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:47:22.553062 | orchestrator | 2025-06-01 22:47:22 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:47:22.555396 | orchestrator | 2025-06-01 22:47:22 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED
2025-06-01 22:47:22.555439 | orchestrator | 2025-06-01 22:47:22 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:47:25.596452 | orchestrator |
2025-06-01 22:47:25.596592 | orchestrator |
2025-06-01 22:47:25.596610 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-06-01 22:47:25.596623 | orchestrator |
2025-06-01 22:47:25.596636 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-06-01 22:47:25.596649 | orchestrator | Sunday 01 June 2025 22:46:23 +0000 (0:00:00.868) 0:00:00.868 ***********
2025-06-01 22:47:25.596661 | orchestrator | ok: [testbed-manager] => {
2025-06-01 22:47:25.596675 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-06-01 22:47:25.596690 | orchestrator | }
2025-06-01 22:47:25.596759 | orchestrator |
2025-06-01 22:47:25.596771 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-06-01 22:47:25.596783 | orchestrator | Sunday 01 June 2025 22:46:23 +0000 (0:00:00.553) 0:00:01.422 ***********
2025-06-01 22:47:25.596796 | orchestrator | ok: [testbed-manager]
2025-06-01 22:47:25.596808 | orchestrator |
2025-06-01 22:47:25.596820 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-06-01 22:47:25.596831 | orchestrator | Sunday 01 June 2025 22:46:25 +0000 (0:00:01.560) 0:00:02.982 ***********
2025-06-01 22:47:25.596843 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-06-01 22:47:25.596854 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-06-01 22:47:25.596866 | orchestrator |
2025-06-01 22:47:25.596877 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-06-01 22:47:25.596888 | orchestrator | Sunday 01 June 2025 22:46:26 +0000 (0:00:01.330) 0:00:04.313 ***********
2025-06-01 22:47:25.596899 | orchestrator | changed: [testbed-manager]
2025-06-01 22:47:25.596910 | orchestrator |
2025-06-01 22:47:25.596921 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-06-01 22:47:25.596933 | orchestrator | Sunday 01 June 2025 22:46:29 +0000 (0:00:02.241) 0:00:06.555 ***********
2025-06-01 22:47:25.596986 | orchestrator | changed: [testbed-manager]
2025-06-01 22:47:25.596999 | orchestrator |
2025-06-01 22:47:25.597011 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-06-01 22:47:25.597024 | orchestrator | Sunday 01 June 2025 22:46:30 +0000 (0:00:01.818) 0:00:08.373 ***********
2025-06-01 22:47:25.597037 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-06-01 22:47:25.597049 | orchestrator | ok: [testbed-manager]
2025-06-01 22:47:25.597063 | orchestrator |
2025-06-01 22:47:25.597075 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-06-01 22:47:25.597087 | orchestrator | Sunday 01 June 2025 22:46:55 +0000 (0:00:24.516) 0:00:32.889 ***********
2025-06-01 22:47:25.597098 | orchestrator | changed: [testbed-manager]
2025-06-01 22:47:25.597109 | orchestrator |
2025-06-01 22:47:25.597120 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:47:25.597132 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:47:25.597145 | orchestrator |
2025-06-01 22:47:25.597156 | orchestrator |
2025-06-01 22:47:25.597195 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:47:25.597207 | orchestrator | Sunday 01 June 2025 22:46:57 +0000 (0:00:01.839) 0:00:34.729 ***********
2025-06-01 22:47:25.597218 | orchestrator | ===============================================================================
2025-06-01 22:47:25.597229 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.52s
2025-06-01 22:47:25.597240 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.24s
2025-06-01 22:47:25.597278 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.84s
2025-06-01 22:47:25.597289 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.82s
2025-06-01 22:47:25.597300 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.56s
2025-06-01 22:47:25.597311 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.33s
2025-06-01 22:47:25.597322 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.55s
2025-06-01 22:47:25.597333 | orchestrator |
2025-06-01 22:47:25.597343 | orchestrator |
2025-06-01 22:47:25.597354 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-06-01 22:47:25.597365 | orchestrator |
2025-06-01 22:47:25.597376 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-06-01 22:47:25.597386 | orchestrator | Sunday 01 June 2025 22:46:22 +0000 (0:00:00.674) 0:00:00.674 ***********
2025-06-01 22:47:25.597398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-06-01 22:47:25.597411 | orchestrator |
2025-06-01 22:47:25.597422 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-06-01 22:47:25.597432 | orchestrator | Sunday 01 June 2025 22:46:23 +0000 (0:00:00.867) 0:00:01.541 ***********
2025-06-01 22:47:25.597460 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-06-01 22:47:25.597471 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-06-01 22:47:25.597482 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-06-01 22:47:25.597494 | orchestrator |
2025-06-01 22:47:25.597505 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-06-01 22:47:25.597516 | orchestrator | Sunday 01 June 2025 22:46:25 +0000 (0:00:02.057) 0:00:03.599 ***********
2025-06-01 22:47:25.597527 | orchestrator | changed: [testbed-manager]
2025-06-01 22:47:25.597538 | orchestrator |
2025-06-01 22:47:25.597549 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-06-01 22:47:25.597560 | orchestrator | Sunday 01 June 2025 22:46:26 +0000 (0:00:01.487) 0:00:05.086 ***********
2025-06-01 22:47:25.597590 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-06-01 22:47:25.597602 | orchestrator | ok: [testbed-manager]
2025-06-01 22:47:25.597613 | orchestrator |
2025-06-01 22:47:25.597624 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-06-01 22:47:25.597635 | orchestrator | Sunday 01 June 2025 22:47:05 +0000 (0:00:38.294) 0:00:43.381 ***********
2025-06-01 22:47:25.597646 | orchestrator | changed: [testbed-manager]
2025-06-01 22:47:25.597657 | orchestrator |
2025-06-01 22:47:25.597667 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-06-01 22:47:25.597678 | orchestrator | Sunday 01 June 2025 22:47:06 +0000 (0:00:01.306) 0:00:44.688 ***********
2025-06-01 22:47:25.597689 | orchestrator | ok: [testbed-manager]
2025-06-01 22:47:25.597723 | orchestrator |
2025-06-01 22:47:25.597734 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-06-01 22:47:25.597745 | orchestrator | Sunday 01 June 2025 22:47:07 +0000 (0:00:01.261) 0:00:45.949 ***********
2025-06-01 22:47:25.597755 | orchestrator | changed: [testbed-manager]
2025-06-01 22:47:25.597766 | orchestrator |
2025-06-01 22:47:25.597777 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-06-01 22:47:25.597788 | orchestrator | Sunday 01 June 2025 22:47:09 +0000 (0:00:02.283) 0:00:48.232 ***********
2025-06-01 22:47:25.597798 | orchestrator | changed: [testbed-manager]
2025-06-01 22:47:25.597809 | orchestrator |
2025-06-01 22:47:25.597820 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-06-01 22:47:25.597831 | orchestrator | Sunday 01 June 2025 22:47:11 +0000 (0:00:01.187) 0:00:49.419 ***********
2025-06-01 22:47:25.597841 | orchestrator | changed: [testbed-manager]
2025-06-01 22:47:25.597861 | orchestrator |
2025-06-01 22:47:25.597872 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-06-01 22:47:25.597883 | orchestrator | Sunday 01 June 2025 22:47:11 +0000 (0:00:00.672) 0:00:50.092 ***********
2025-06-01 22:47:25.597894 | orchestrator | ok: [testbed-manager]
2025-06-01 22:47:25.597905 | orchestrator |
2025-06-01 22:47:25.597916 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:47:25.597927 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:47:25.597938 | orchestrator |
2025-06-01 22:47:25.597949 | orchestrator |
2025-06-01 22:47:25.597960 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:47:25.597971 | orchestrator | Sunday 01 June 2025 22:47:12 +0000 (0:00:00.532) 0:00:50.625 ***********
2025-06-01 22:47:25.597982 | orchestrator | ===============================================================================
2025-06-01 22:47:25.597993 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 38.29s
2025-06-01 22:47:25.598003 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.28s
2025-06-01 22:47:25.598014 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.06s
2025-06-01 22:47:25.598101 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.49s
2025-06-01 22:47:25.598112 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.31s
2025-06-01 22:47:25.598173 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.26s
2025-06-01 22:47:25.598185 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.19s
2025-06-01 22:47:25.598196 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.87s
2025-06-01 22:47:25.598207 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.67s
2025-06-01 22:47:25.598218 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.53s
2025-06-01 22:47:25.598229 | orchestrator |
2025-06-01 22:47:25.598247 | orchestrator | 2025-06-01 22:47:25 | INFO  | Task f24f2696-36a7-4eaa-bf2a-08566ad0a469 is in state SUCCESS
2025-06-01 22:47:25.598468 | orchestrator |
2025-06-01 22:47:25.598560 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 22:47:25.598576 | orchestrator |
2025-06-01 22:47:25.598586 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 22:47:25.598596 | orchestrator | Sunday 01 June 2025 22:46:22 +0000 (0:00:00.788) 0:00:00.788 ***********
2025-06-01 22:47:25.598607 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-06-01 22:47:25.598617 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-06-01 22:47:25.598627 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-06-01 22:47:25.598636 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-06-01 22:47:25.598646 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-06-01 22:47:25.598655 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-06-01 22:47:25.598665 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-06-01 22:47:25.598675 | orchestrator |
2025-06-01 22:47:25.598684 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-06-01 22:47:25.598734 | orchestrator |
2025-06-01 22:47:25.598747 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-06-01 22:47:25.598757 | orchestrator | Sunday 01 June 2025 22:46:24 +0000 (0:00:02.444) 0:00:03.232 ***********
2025-06-01 22:47:25.598783 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:47:25.598796 | orchestrator |
2025-06-01 22:47:25.598828 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-06-01 22:47:25.598838 | orchestrator | Sunday 01 June 2025 22:46:27 +0000 (0:00:02.302) 0:00:05.535 ***********
2025-06-01 22:47:25.598848 | orchestrator | ok: [testbed-manager]
2025-06-01 22:47:25.598859 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:47:25.598868 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:47:25.598878 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:47:25.598891 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:47:25.598909 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:47:25.598926 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:47:25.598944 | orchestrator |
2025-06-01 22:47:25.598962 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-06-01 22:47:25.598979 | orchestrator | Sunday 01 June 2025 22:46:29 +0000 (0:00:04.153) 0:00:07.897 ***********
2025-06-01 22:47:25.598997 | orchestrator | ok: [testbed-manager]
2025-06-01 22:47:25.599010 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:47:25.599020 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:47:25.599031 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:47:25.599042 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:47:25.599053 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:47:25.599064 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:47:25.599075 | orchestrator |
2025-06-01 22:47:25.599087 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-06-01 22:47:25.599098 | orchestrator | Sunday 01 June 2025 22:46:33 +0000 (0:00:04.153) 0:00:12.051 ***********
2025-06-01 22:47:25.599109 | orchestrator | changed: [testbed-manager]
2025-06-01 22:47:25.599120 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:47:25.599131 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:47:25.599140 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:47:25.599150 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:47:25.599159 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:47:25.599169 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:47:25.599178 | orchestrator |
2025-06-01 22:47:25.599188 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-06-01 22:47:25.599197 | orchestrator | Sunday 01 June 2025 22:46:36 +0000 (0:00:02.813) 0:00:14.864 ***********
2025-06-01 22:47:25.599207 | orchestrator | changed: [testbed-manager]
2025-06-01 22:47:25.599216 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:47:25.599226 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:47:25.599236 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:47:25.599245 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:47:25.599254 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:47:25.599264 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:47:25.599273 | orchestrator |
2025-06-01 22:47:25.599283 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-06-01 22:47:25.599292 | orchestrator | Sunday 01 June 2025 22:46:46 +0000 (0:00:10.093) 0:00:24.958 ***********
2025-06-01 22:47:25.599301 | orchestrator | changed: [testbed-manager]
2025-06-01 22:47:25.599311 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:47:25.599320 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:47:25.599330 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:47:25.599340 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:47:25.599349 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:47:25.599359 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:47:25.599368 | orchestrator |
2025-06-01 22:47:25.599378 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-06-01 22:47:25.599388 | orchestrator | Sunday 01 June 2025 22:47:02 +0000 (0:00:16.064) 0:00:41.022 ***********
2025-06-01 22:47:25.599399 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:47:25.599411 | orchestrator |
2025-06-01 22:47:25.599420 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-06-01 22:47:25.599438 | orchestrator | Sunday 01 June 2025 22:47:03 +0000 (0:00:01.136) 0:00:42.159 ***********
2025-06-01 22:47:25.599448 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-06-01 22:47:25.599458 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-06-01 22:47:25.599468 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-06-01 22:47:25.599478 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-06-01 22:47:25.599504 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-06-01 22:47:25.599514 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-06-01 22:47:25.599523 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-06-01 22:47:25.599533 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-06-01 22:47:25.599542 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-06-01 22:47:25.599552 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-06-01 22:47:25.599562 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-06-01 22:47:25.599571 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-06-01 22:47:25.599580 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-06-01 22:47:25.599590 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-06-01 22:47:25.599599 | orchestrator |
2025-06-01 22:47:25.599655 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-06-01 22:47:25.599667 | orchestrator | Sunday 01 June 2025 22:47:09 +0000 (0:00:05.712) 0:00:47.871 ***********
2025-06-01 22:47:25.599677 | orchestrator | ok: [testbed-manager]
2025-06-01 22:47:25.599686 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:47:25.599726 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:47:25.599738 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:47:25.599747 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:47:25.599757 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:47:25.599766 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:47:25.599775 | orchestrator |
2025-06-01 22:47:25.599785 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-06-01 22:47:25.599795 | orchestrator | Sunday 01 June 2025 22:47:11 +0000 (0:00:01.672) 0:00:49.544 ***********
2025-06-01 22:47:25.599805 | orchestrator | changed: [testbed-manager]
2025-06-01 22:47:25.599815 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:47:25.599824 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:47:25.599834 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:47:25.599855 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:47:25.599864 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:47:25.599874 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:47:25.599883 | orchestrator |
2025-06-01 22:47:25.599893 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-06-01 22:47:25.599903 | orchestrator | Sunday 01 June 2025 22:47:12 +0000 (0:00:01.732) 0:00:51.276 ***********
2025-06-01 22:47:25.599912 | orchestrator | ok: [testbed-manager]
2025-06-01 22:47:25.599922 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:47:25.599939 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:47:25.599955 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:47:25.599971 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:47:25.599989 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:47:25.600006 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:47:25.600022 | orchestrator |
2025-06-01 22:47:25.600037 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-06-01 22:47:25.600048 | orchestrator | Sunday 01 June 2025 22:47:14 +0000 (0:00:01.433) 0:00:52.710 ***********
2025-06-01 22:47:25.600057 | orchestrator | ok: [testbed-manager]
2025-06-01 22:47:25.600066 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:47:25.600076 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:47:25.600085 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:47:25.600095 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:47:25.600104 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:47:25.600114 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:47:25.600133 | orchestrator |
2025-06-01 22:47:25.600144 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-06-01 22:47:25.600153 | orchestrator | Sunday 01 June 2025 22:47:16 +0000 (0:00:02.313) 0:00:55.024 ***********
2025-06-01 22:47:25.600163 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-06-01 22:47:25.600176 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:47:25.600186 | orchestrator |
2025-06-01 22:47:25.600195 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-06-01 22:47:25.600205 | orchestrator | Sunday 01 June 2025 22:47:17 +0000 (0:00:01.348) 0:00:56.373 ***********
2025-06-01 22:47:25.600214 | orchestrator | changed: [testbed-manager]
2025-06-01 22:47:25.600224 | orchestrator |
2025-06-01 22:47:25.600233 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-06-01 22:47:25.600243 | orchestrator | Sunday 01 June 2025 22:47:19 +0000 (0:00:01.582) 0:00:57.955 ***********
2025-06-01 22:47:25.600252 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:47:25.600262 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:47:25.600271 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:47:25.600281 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:47:25.600290 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:47:25.600300 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:47:25.600309 | orchestrator | changed: [testbed-manager]
2025-06-01 22:47:25.600319 | orchestrator |
2025-06-01 22:47:25.600328 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:47:25.600338 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:47:25.600349 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:47:25.600359 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:47:25.600369 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:47:25.600387 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:47:25.600397 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:47:25.600407 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:47:25.600416 | orchestrator |
2025-06-01 22:47:25.600426 | orchestrator |
2025-06-01 22:47:25.600436 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:47:25.600446 | orchestrator | Sunday 01 June 2025 22:47:22 +0000 (0:00:03.412) 0:01:01.367 ***********
2025-06-01 22:47:25.600455 | orchestrator | ===============================================================================
2025-06-01 22:47:25.600465 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 16.06s
2025-06-01 22:47:25.600475 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.09s
2025-06-01 22:47:25.600490 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.71s
2025-06-01 22:47:25.600500 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.16s
2025-06-01 22:47:25.600510 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.41s
2025-06-01 22:47:25.600519 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.81s
2025-06-01 22:47:25.600536 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.44s
2025-06-01 22:47:25.600546 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.36s
2025-06-01 22:47:25.600555 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.31s
2025-06-01 22:47:25.600565 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.30s 2025-06-01 22:47:25.600574 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.73s 2025-06-01 22:47:25.600590 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.67s 2025-06-01 22:47:25.600600 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.58s 2025-06-01 22:47:25.600610 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.43s 2025-06-01 22:47:25.600620 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.35s 2025-06-01 22:47:25.600629 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.14s 2025-06-01 22:47:25.600668 | orchestrator | 2025-06-01 22:47:25 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:47:25.600795 | orchestrator | 2025-06-01 22:47:25 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:47:25.600811 | orchestrator | 2025-06-01 22:47:25 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:47:25.602169 | orchestrator | 2025-06-01 22:47:25 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED 2025-06-01 22:47:25.602193 | orchestrator | 2025-06-01 22:47:25 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:47:28.645689 | orchestrator | 2025-06-01 22:47:28 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:47:28.648298 | orchestrator | 2025-06-01 22:47:28 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:47:28.648919 | orchestrator | 2025-06-01 22:47:28 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:47:28.651216 | orchestrator | 
2025-06-01 22:47:28 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED 2025-06-01 22:47:28.651257 | orchestrator | 2025-06-01 22:47:28 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:47:31.688776 | orchestrator | 2025-06-01 22:47:31 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:47:31.689596 | orchestrator | 2025-06-01 22:47:31 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:47:31.691088 | orchestrator | 2025-06-01 22:47:31 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:47:31.691976 | orchestrator | 2025-06-01 22:47:31 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED 2025-06-01 22:47:31.692047 | orchestrator | 2025-06-01 22:47:31 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:47:34.731128 | orchestrator | 2025-06-01 22:47:34 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:47:34.732784 | orchestrator | 2025-06-01 22:47:34 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:47:34.735394 | orchestrator | 2025-06-01 22:47:34 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:47:34.738736 | orchestrator | 2025-06-01 22:47:34 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED 2025-06-01 22:47:34.738889 | orchestrator | 2025-06-01 22:47:34 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:47:37.789746 | orchestrator | 2025-06-01 22:47:37 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:47:37.796099 | orchestrator | 2025-06-01 22:47:37 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:47:37.798289 | orchestrator | 2025-06-01 22:47:37 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:47:37.799952 | orchestrator | 2025-06-01 22:47:37 | INFO  | 
Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED 2025-06-01 22:47:37.799973 | orchestrator | 2025-06-01 22:47:37 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:47:40.848670 | orchestrator | 2025-06-01 22:47:40 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:47:40.849487 | orchestrator | 2025-06-01 22:47:40 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:47:40.851220 | orchestrator | 2025-06-01 22:47:40 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:47:40.852240 | orchestrator | 2025-06-01 22:47:40 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED 2025-06-01 22:47:40.852265 | orchestrator | 2025-06-01 22:47:40 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:47:43.926582 | orchestrator | 2025-06-01 22:47:43 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:47:43.926760 | orchestrator | 2025-06-01 22:47:43 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:47:43.929544 | orchestrator | 2025-06-01 22:47:43 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:47:43.930830 | orchestrator | 2025-06-01 22:47:43 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED 2025-06-01 22:47:43.930857 | orchestrator | 2025-06-01 22:47:43 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:47:46.989110 | orchestrator | 2025-06-01 22:47:46 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:47:46.990468 | orchestrator | 2025-06-01 22:47:46 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:47:46.993097 | orchestrator | 2025-06-01 22:47:46 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:47:46.994590 | orchestrator | 2025-06-01 22:47:46 | INFO  | Task 
0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED 2025-06-01 22:47:46.994884 | orchestrator | 2025-06-01 22:47:46 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:47:50.054478 | orchestrator | 2025-06-01 22:47:50 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:47:50.058437 | orchestrator | 2025-06-01 22:47:50 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:47:50.062343 | orchestrator | 2025-06-01 22:47:50 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:47:50.066250 | orchestrator | 2025-06-01 22:47:50 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state STARTED 2025-06-01 22:47:50.066728 | orchestrator | 2025-06-01 22:47:50 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:47:53.116755 | orchestrator | 2025-06-01 22:47:53 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:47:53.116863 | orchestrator | 2025-06-01 22:47:53 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:47:53.118781 | orchestrator | 2025-06-01 22:47:53 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:47:53.119060 | orchestrator | 2025-06-01 22:47:53 | INFO  | Task 0c8648ab-09a2-4552-bad8-92b532e93e9c is in state SUCCESS 2025-06-01 22:47:53.119708 | orchestrator | 2025-06-01 22:47:53 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:47:56.190062 | orchestrator | 2025-06-01 22:47:56 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:47:56.192047 | orchestrator | 2025-06-01 22:47:56 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:47:56.192079 | orchestrator | 2025-06-01 22:47:56 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:47:56.192093 | orchestrator | 2025-06-01 22:47:56 | INFO  | Wait 1 second(s) until the next 
check 2025-06-01 22:47:59.236771 | orchestrator | 2025-06-01 22:47:59 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:47:59.238407 | orchestrator | 2025-06-01 22:47:59 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:47:59.243033 | orchestrator | 2025-06-01 22:47:59 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:47:59.243590 | orchestrator | 2025-06-01 22:47:59 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:48:02.298891 | orchestrator | 2025-06-01 22:48:02 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:48:02.300217 | orchestrator | 2025-06-01 22:48:02 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:48:02.302117 | orchestrator | 2025-06-01 22:48:02 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:48:02.302163 | orchestrator | 2025-06-01 22:48:02 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:48:05.347201 | orchestrator | 2025-06-01 22:48:05 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:48:05.348915 | orchestrator | 2025-06-01 22:48:05 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:48:05.351230 | orchestrator | 2025-06-01 22:48:05 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:48:05.351269 | orchestrator | 2025-06-01 22:48:05 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:48:08.397635 | orchestrator | 2025-06-01 22:48:08 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:48:08.397875 | orchestrator | 2025-06-01 22:48:08 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:48:08.398405 | orchestrator | 2025-06-01 22:48:08 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 
22:48:08.398431 | orchestrator | 2025-06-01 22:48:08 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:48:11.439100 | orchestrator | 2025-06-01 22:48:11 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:48:11.441733 | orchestrator | 2025-06-01 22:48:11 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:48:11.444378 | orchestrator | 2025-06-01 22:48:11 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:48:11.444406 | orchestrator | 2025-06-01 22:48:11 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:48:14.493394 | orchestrator | 2025-06-01 22:48:14 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:48:14.495857 | orchestrator | 2025-06-01 22:48:14 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:48:14.497189 | orchestrator | 2025-06-01 22:48:14 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:48:14.497255 | orchestrator | 2025-06-01 22:48:14 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:48:17.550117 | orchestrator | 2025-06-01 22:48:17 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:48:17.552514 | orchestrator | 2025-06-01 22:48:17 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:48:17.554603 | orchestrator | 2025-06-01 22:48:17 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:48:17.555135 | orchestrator | 2025-06-01 22:48:17 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:48:20.610341 | orchestrator | 2025-06-01 22:48:20 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:48:20.611622 | orchestrator | 2025-06-01 22:48:20 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:48:20.613444 | orchestrator | 2025-06-01 22:48:20 | 
INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:48:20.614981 | orchestrator | 2025-06-01 22:48:20 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:48:23.664715 | orchestrator | 2025-06-01 22:48:23 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:48:23.666083 | orchestrator | 2025-06-01 22:48:23 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:48:23.668475 | orchestrator | 2025-06-01 22:48:23 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:48:23.668506 | orchestrator | 2025-06-01 22:48:23 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:48:26.713451 | orchestrator | 2025-06-01 22:48:26 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:48:26.714537 | orchestrator | 2025-06-01 22:48:26 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:48:26.714577 | orchestrator | 2025-06-01 22:48:26 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:48:26.714590 | orchestrator | 2025-06-01 22:48:26 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:48:29.759305 | orchestrator | 2025-06-01 22:48:29 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:48:29.761290 | orchestrator | 2025-06-01 22:48:29 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:48:29.762275 | orchestrator | 2025-06-01 22:48:29 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:48:29.762407 | orchestrator | 2025-06-01 22:48:29 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:48:32.806296 | orchestrator | 2025-06-01 22:48:32 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED 2025-06-01 22:48:32.807643 | orchestrator | 2025-06-01 22:48:32 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in 
state STARTED
2025-06-01 22:48:32.809710 | orchestrator | 2025-06-01 22:48:32 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:48:32.809738 | orchestrator | 2025-06-01 22:48:32 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:48:35.844365 | orchestrator | 2025-06-01 22:48:35 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state STARTED
2025-06-01 22:48:35.849219 | orchestrator | 2025-06-01 22:48:35 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:48:35.849380 | orchestrator | 2025-06-01 22:48:35 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:48:35.849410 | orchestrator | 2025-06-01 22:48:35 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:48:38.897961 | orchestrator | 2025-06-01 22:48:38 | INFO  | Task d7b39b9d-dd2c-43f8-a425-91bd24d4f1ad is in state SUCCESS
2025-06-01 22:48:38.899796 | orchestrator |
2025-06-01 22:48:38.899844 | orchestrator |
2025-06-01 22:48:38.899858 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-06-01 22:48:38.899870 | orchestrator |
2025-06-01 22:48:38.899881 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-06-01 22:48:38.899893 | orchestrator | Sunday 01 June 2025 22:46:44 +0000 (0:00:00.303) 0:00:00.303 ***********
2025-06-01 22:48:38.899905 | orchestrator | ok: [testbed-manager]
2025-06-01 22:48:38.899917 | orchestrator |
2025-06-01 22:48:38.899929 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-06-01 22:48:38.899940 | orchestrator | Sunday 01 June 2025 22:46:45 +0000 (0:00:01.017) 0:00:01.321 ***********
2025-06-01 22:48:38.899951 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-06-01 22:48:38.899963 | orchestrator |
2025-06-01 22:48:38.899974 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-06-01 22:48:38.899985 | orchestrator | Sunday 01 June 2025 22:46:45 +0000 (0:00:00.496) 0:00:01.817 ***********
2025-06-01 22:48:38.899996 | orchestrator | changed: [testbed-manager]
2025-06-01 22:48:38.900007 | orchestrator |
2025-06-01 22:48:38.900017 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-06-01 22:48:38.900028 | orchestrator | Sunday 01 June 2025 22:46:47 +0000 (0:00:01.547) 0:00:03.364 ***********
2025-06-01 22:48:38.900039 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-06-01 22:48:38.900050 | orchestrator | ok: [testbed-manager]
2025-06-01 22:48:38.900061 | orchestrator |
2025-06-01 22:48:38.900072 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-06-01 22:48:38.900083 | orchestrator | Sunday 01 June 2025 22:47:48 +0000 (0:01:01.644) 0:01:05.009 ***********
2025-06-01 22:48:38.900094 | orchestrator | changed: [testbed-manager]
2025-06-01 22:48:38.900105 | orchestrator |
2025-06-01 22:48:38.900116 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:48:38.900127 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:48:38.900140 | orchestrator |
2025-06-01 22:48:38.900151 | orchestrator |
2025-06-01 22:48:38.900162 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:48:38.900173 | orchestrator | Sunday 01 June 2025 22:47:52 +0000 (0:00:03.550) 0:01:08.559 ***********
2025-06-01 22:48:38.900184 | orchestrator | ===============================================================================
2025-06-01 22:48:38.900195 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 61.64s
2025-06-01 22:48:38.900208 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.55s
2025-06-01 22:48:38.900220 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.55s
2025-06-01 22:48:38.900233 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.02s
2025-06-01 22:48:38.900246 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.50s
2025-06-01 22:48:38.900258 | orchestrator |
2025-06-01 22:48:38.900271 | orchestrator |
2025-06-01 22:48:38.900283 | orchestrator | PLAY [Apply role common] *******************************************************
2025-06-01 22:48:38.900295 | orchestrator |
2025-06-01 22:48:38.900307 | orchestrator | TASK [common : include_tasks] **************************************************
2025-06-01 22:48:38.900320 | orchestrator | Sunday 01 June 2025 22:46:14 +0000 (0:00:00.278) 0:00:00.278 ***********
2025-06-01 22:48:38.900334 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:48:38.900348 | orchestrator |
2025-06-01 22:48:38.900361 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-06-01 22:48:38.900407 | orchestrator | Sunday 01 June 2025 22:46:16 +0000 (0:00:01.243) 0:00:01.521 ***********
2025-06-01 22:48:38.900450 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-01 22:48:38.900466 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-01 22:48:38.900477 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-01 22:48:38.900488 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-01 22:48:38.900505 | orchestrator | changed: [testbed-node-2] =>
(item=[{'service_name': 'cron'}, 'cron']) 2025-06-01 22:48:38.900517 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-01 22:48:38.900527 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-01 22:48:38.900538 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-01 22:48:38.900549 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-01 22:48:38.900559 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-01 22:48:38.900572 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-01 22:48:38.900583 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-01 22:48:38.900593 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-01 22:48:38.900604 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-01 22:48:38.900615 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-01 22:48:38.900626 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-01 22:48:38.900651 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-01 22:48:38.900686 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-01 22:48:38.900698 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-01 22:48:38.900709 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-01 22:48:38.900720 | orchestrator | changed: [testbed-node-5] => 
(item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-01 22:48:38.900731 | orchestrator | 2025-06-01 22:48:38.900741 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-06-01 22:48:38.900752 | orchestrator | Sunday 01 June 2025 22:46:20 +0000 (0:00:04.176) 0:00:05.698 *********** 2025-06-01 22:48:38.900763 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:48:38.900775 | orchestrator | 2025-06-01 22:48:38.900786 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-06-01 22:48:38.900796 | orchestrator | Sunday 01 June 2025 22:46:21 +0000 (0:00:01.239) 0:00:06.938 *********** 2025-06-01 22:48:38.900812 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 22:48:38.900828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 
22:48:38.900848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 22:48:38.900860 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 22:48:38.900872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 22:48:38.900904 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 22:48:38.900922 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 22:48:38.900935 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.900947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.900964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.900980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.900992 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.901017 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.901029 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.901044 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.901063 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.901103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.901118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.901143 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.901156 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.901167 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.901178 | orchestrator | 2025-06-01 22:48:38.901189 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-06-01 22:48:38.901206 | orchestrator | Sunday 01 June 2025 22:46:26 +0000 (0:00:04.641) 0:00:11.580 *********** 2025-06-01 22:48:38.901219 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-01 22:48:38.901231 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.901249 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.901261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-01 22:48:38.901273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-06-01 22:48:38.901289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.901300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-01 22:48:38.901322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.901334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.901345 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:48:38.901363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-01 22:48:38.901375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.901386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.901397 | 
orchestrator | skipping: [testbed-node-1] 2025-06-01 22:48:38.901409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-01 22:48:38.901424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.901436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.901448 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:48:38.901464 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:48:38.901475 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:48:38.901487 | orchestrator 
| skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-01 22:48:38.901504 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.901516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.901527 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:48:38.901538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-01 22:48:38.901549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.901569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.901580 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:48:38.901591 | orchestrator | 2025-06-01 22:48:38.901602 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-06-01 22:48:38.901613 | orchestrator | Sunday 01 June 2025 22:46:27 +0000 (0:00:01.361) 0:00:12.941 *********** 2025-06-01 22:48:38.901625 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-01 22:48:38.901642 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.901729 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.901744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-01 22:48:38.901757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.901768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.901779 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:48:38.901791 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:48:38.901807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-01 22:48:38.901819 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.902788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.902831 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:48:38.902842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-01 22:48:38.902853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.902864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.902874 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:48:38.902884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-01 22:48:38.902894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.902906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.902917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-01 22:48:38.902947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 22:48:38.902958 | orchestrator | skipping: [testbed-node-3] 
2025-06-01 22:48:38.902969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.902980 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:48:38.902995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 22:48:38.903006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.903017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.903027 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:48:38.903038 | orchestrator |
2025-06-01 22:48:38.903048 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-06-01 22:48:38.903058 | orchestrator | Sunday 01 June 2025 22:46:29 +0000 (0:00:02.034) 0:00:14.975 ***********
2025-06-01 22:48:38.903068 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:48:38.903078 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:48:38.903088 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:48:38.903098 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:48:38.903108 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:48:38.903121 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:48:38.903132 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:48:38.903142 | orchestrator |
2025-06-01 22:48:38.903158 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-06-01 22:48:38.903168 | orchestrator | Sunday 01 June 2025 22:46:30 +0000 (0:00:00.874) 0:00:15.849 ***********
2025-06-01 22:48:38.903178 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:48:38.903188 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:48:38.903198 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:48:38.903208 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:48:38.903218 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:48:38.903227 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:48:38.903237 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:48:38.903247 | orchestrator |
2025-06-01 22:48:38.903257 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-06-01 22:48:38.903267 | orchestrator | Sunday 01 June 2025 22:46:31 +0000 (0:00:01.261) 0:00:17.111 ***********
2025-06-01 22:48:38.903289 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 22:48:38.903300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 22:48:38.903311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 22:48:38.903321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 22:48:38.903331 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 22:48:38.903342 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.903361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.903372 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 22:48:38.903390 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 22:48:38.903402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.903415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.903427 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.903439 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.903461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.903474 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.903493 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.903505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.903517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.903529 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.903541 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.903553 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.903571 | orchestrator |
2025-06-01 22:48:38.903583 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-06-01 22:48:38.903595 | orchestrator | Sunday 01 June 2025 22:46:37 +0000 (0:00:05.327) 0:00:22.438 ***********
2025-06-01 22:48:38.903606 | orchestrator | [WARNING]: Skipped
2025-06-01 22:48:38.903619 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-06-01 22:48:38.903630 | orchestrator | to this access issue:
2025-06-01 22:48:38.903642 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-06-01 22:48:38.903653 | orchestrator | directory
2025-06-01 22:48:38.903690 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-01 22:48:38.903702 | orchestrator |
2025-06-01 22:48:38.903713 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-06-01 22:48:38.903724 | orchestrator | Sunday 01 June 2025 22:46:38 +0000 (0:00:01.273) 0:00:23.711 ***********
2025-06-01 22:48:38.903733 | orchestrator | [WARNING]: Skipped
2025-06-01 22:48:38.903743 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-06-01 22:48:38.903753 | orchestrator | to this access issue:
2025-06-01 22:48:38.903762 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-06-01 22:48:38.903772 | orchestrator | directory
2025-06-01 22:48:38.903782 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-01 22:48:38.903791 | orchestrator |
2025-06-01 22:48:38.903801 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-06-01 22:48:38.903810 | orchestrator | Sunday 01 June 2025 22:46:39 +0000 (0:00:01.534) 0:00:25.246 ***********
2025-06-01 22:48:38.903820 | orchestrator | [WARNING]: Skipped
2025-06-01 22:48:38.903830 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-06-01 22:48:38.903839 | orchestrator | to this access issue:
2025-06-01 22:48:38.903849 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-06-01 22:48:38.903858 | orchestrator | directory
2025-06-01 22:48:38.903868 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-01 22:48:38.903878 | orchestrator |
2025-06-01 22:48:38.903892 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-06-01 22:48:38.903902 | orchestrator | Sunday 01 June 2025 22:46:41 +0000 (0:00:01.247) 0:00:26.494 ***********
2025-06-01 22:48:38.903912 | orchestrator | [WARNING]: Skipped
2025-06-01 22:48:38.903922 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-06-01 22:48:38.903932 | orchestrator | to this access issue:
2025-06-01 22:48:38.903941 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-06-01 22:48:38.903951 | orchestrator | directory
2025-06-01 22:48:38.903961 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-01 22:48:38.903971 | orchestrator |
2025-06-01 22:48:38.903980 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-06-01 22:48:38.903990 | orchestrator | Sunday 01 June 2025 22:46:42 +0000 (0:00:03.708) 0:00:27.742 ***********
2025-06-01 22:48:38.904000 | orchestrator | changed: [testbed-manager]
2025-06-01 22:48:38.904009 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:48:38.904019 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:48:38.904029 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:48:38.904038 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:48:38.904048 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:48:38.904057 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:48:38.904072 | orchestrator |
2025-06-01 22:48:38.904082 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-06-01 22:48:38.904092 | orchestrator | Sunday 01 June 2025 22:46:46 +0000 (0:00:03.708) 0:00:31.451 ***********
2025-06-01 22:48:38.904102 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-01 22:48:38.904112 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-01 22:48:38.904122 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-01 22:48:38.904132 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-01 22:48:38.904141 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-01 22:48:38.904151 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-01 22:48:38.904160 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-01 22:48:38.904170 | orchestrator |
2025-06-01 22:48:38.904179 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-06-01 22:48:38.904189 | orchestrator | Sunday 01 June 2025 22:46:49 +0000 (0:00:03.714) 0:00:35.165 ***********
2025-06-01 22:48:38.904199 | orchestrator | changed: [testbed-manager]
2025-06-01 22:48:38.904209 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:48:38.904218 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:48:38.904228 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:48:38.904237 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:48:38.904247 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:48:38.904256 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:48:38.904266 | orchestrator |
2025-06-01 22:48:38.904276 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-06-01 22:48:38.904285 | orchestrator | Sunday 01 June 2025 22:46:52 +0000 (0:00:02.406) 0:00:37.572 ***********
2025-06-01 22:48:38.904295 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 22:48:38.904309 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.904320 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 22:48:38.904335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.904353 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 22:48:38.904363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.904374 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.904385 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 22:48:38.904399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.904409 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.904430 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.904446 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 22:48:38.904457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.904467 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.904477 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.904488 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 22:48:38.904504 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.904515 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 22:48:38.904535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.904546 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.904556 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:48:38.904566 | orchestrator |
2025-06-01 22:48:38.904576 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-06-01 22:48:38.904585 | orchestrator | Sunday 01 June 2025 22:46:54 +0000 (0:00:02.458) 0:00:40.030 ***********
2025-06-01 22:48:38.904595 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-01 22:48:38.904605 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-01 22:48:38.904615 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-01 22:48:38.904624 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-01 22:48:38.904634 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-01 22:48:38.904643 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-01 22:48:38.904653 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-01 22:48:38.904679 | orchestrator |
2025-06-01 22:48:38.904690 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-06-01 22:48:38.904699 | orchestrator | Sunday 01 June 2025 22:46:58 +0000 (0:00:03.539) 0:00:43.570 ***********
2025-06-01 22:48:38.904709 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-01 22:48:38.904718 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-01 22:48:38.904728 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-01 22:48:38.904737 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-01 22:48:38.904747 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-01 22:48:38.904756 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-01 22:48:38.904766 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-01 22:48:38.904775 | orchestrator |
2025-06-01 22:48:38.904785 | orchestrator | TASK [common : Check common containers] ****************************************
2025-06-01 22:48:38.904794 | orchestrator | Sunday 01 June 2025 22:47:00 +0000 (0:00:02.066) 0:00:45.637 ***********
2025-06-01 22:48:38.904810 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 22:48:38.904820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 22:48:38.904836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 22:48:38.904847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 22:48:38.904857 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.904872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.904882 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 22:48:38.904900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 
22:48:38.904916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.904926 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.904936 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 22:48:38.904946 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-01 22:48:38.904956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.904967 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.904981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.904995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.905005 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.905021 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.905031 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.905041 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.905051 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:48:38.905061 | orchestrator | 2025-06-01 22:48:38.905071 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-06-01 22:48:38.905081 | orchestrator | Sunday 01 June 2025 22:47:03 +0000 (0:00:03.625) 0:00:49.262 *********** 2025-06-01 22:48:38.905091 | orchestrator | changed: [testbed-manager] 2025-06-01 22:48:38.905100 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:48:38.905110 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:48:38.905120 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:48:38.905134 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:48:38.905143 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:48:38.905153 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:48:38.905162 | orchestrator | 2025-06-01 22:48:38.905172 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] 
*********************** 2025-06-01 22:48:38.905181 | orchestrator | Sunday 01 June 2025 22:47:06 +0000 (0:00:02.091) 0:00:51.353 *********** 2025-06-01 22:48:38.905191 | orchestrator | changed: [testbed-manager] 2025-06-01 22:48:38.905201 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:48:38.905210 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:48:38.905219 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:48:38.905229 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:48:38.905238 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:48:38.905248 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:48:38.905257 | orchestrator | 2025-06-01 22:48:38.905267 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-01 22:48:38.905277 | orchestrator | Sunday 01 June 2025 22:47:07 +0000 (0:00:01.410) 0:00:52.764 *********** 2025-06-01 22:48:38.905286 | orchestrator | 2025-06-01 22:48:38.905296 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-01 22:48:38.905309 | orchestrator | Sunday 01 June 2025 22:47:07 +0000 (0:00:00.109) 0:00:52.874 *********** 2025-06-01 22:48:38.905319 | orchestrator | 2025-06-01 22:48:38.905329 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-01 22:48:38.905338 | orchestrator | Sunday 01 June 2025 22:47:07 +0000 (0:00:00.099) 0:00:52.973 *********** 2025-06-01 22:48:38.905348 | orchestrator | 2025-06-01 22:48:38.905357 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-01 22:48:38.905367 | orchestrator | Sunday 01 June 2025 22:47:07 +0000 (0:00:00.098) 0:00:53.071 *********** 2025-06-01 22:48:38.905376 | orchestrator | 2025-06-01 22:48:38.905386 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-01 22:48:38.905395 | orchestrator | Sunday 01 
June 2025 22:47:07 +0000 (0:00:00.099) 0:00:53.170 *********** 2025-06-01 22:48:38.905405 | orchestrator | 2025-06-01 22:48:38.905414 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-01 22:48:38.905424 | orchestrator | Sunday 01 June 2025 22:47:08 +0000 (0:00:00.236) 0:00:53.407 *********** 2025-06-01 22:48:38.905434 | orchestrator | 2025-06-01 22:48:38.905443 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-01 22:48:38.905453 | orchestrator | Sunday 01 June 2025 22:47:08 +0000 (0:00:00.077) 0:00:53.485 *********** 2025-06-01 22:48:38.905462 | orchestrator | 2025-06-01 22:48:38.905472 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-06-01 22:48:38.905482 | orchestrator | Sunday 01 June 2025 22:47:08 +0000 (0:00:00.098) 0:00:53.583 *********** 2025-06-01 22:48:38.905495 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:48:38.905505 | orchestrator | changed: [testbed-manager] 2025-06-01 22:48:38.905515 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:48:38.905525 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:48:38.905534 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:48:38.905544 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:48:38.905553 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:48:38.905563 | orchestrator | 2025-06-01 22:48:38.905572 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-06-01 22:48:38.905582 | orchestrator | Sunday 01 June 2025 22:47:43 +0000 (0:00:35.614) 0:01:29.197 *********** 2025-06-01 22:48:38.905592 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:48:38.905601 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:48:38.905611 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:48:38.905620 | orchestrator | changed: [testbed-node-1] 2025-06-01 
22:48:38.905629 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:48:38.905639 | orchestrator | changed: [testbed-manager] 2025-06-01 22:48:38.905648 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:48:38.905688 | orchestrator | 2025-06-01 22:48:38.905698 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-06-01 22:48:38.905708 | orchestrator | Sunday 01 June 2025 22:48:24 +0000 (0:00:40.300) 0:02:09.498 *********** 2025-06-01 22:48:38.905718 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:48:38.905727 | orchestrator | ok: [testbed-manager] 2025-06-01 22:48:38.905737 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:48:38.905747 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:48:38.905756 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:48:38.905766 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:48:38.905776 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:48:38.905785 | orchestrator | 2025-06-01 22:48:38.905795 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-06-01 22:48:38.905804 | orchestrator | Sunday 01 June 2025 22:48:26 +0000 (0:00:02.394) 0:02:11.892 *********** 2025-06-01 22:48:38.905814 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:48:38.905824 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:48:38.905833 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:48:38.905843 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:48:38.905852 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:48:38.905862 | orchestrator | changed: [testbed-manager] 2025-06-01 22:48:38.905871 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:48:38.905881 | orchestrator | 2025-06-01 22:48:38.905890 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:48:38.905901 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 
skipped=4  rescued=0 ignored=0 2025-06-01 22:48:38.905911 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-01 22:48:38.905921 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-01 22:48:38.905931 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-01 22:48:38.905940 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-01 22:48:38.905950 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-01 22:48:38.905959 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-01 22:48:38.905969 | orchestrator | 2025-06-01 22:48:38.905979 | orchestrator | 2025-06-01 22:48:38.906091 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:48:38.906102 | orchestrator | Sunday 01 June 2025 22:48:35 +0000 (0:00:08.969) 0:02:20.862 *********** 2025-06-01 22:48:38.906112 | orchestrator | =============================================================================== 2025-06-01 22:48:38.906122 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 40.30s 2025-06-01 22:48:38.906136 | orchestrator | common : Restart fluentd container ------------------------------------- 35.61s 2025-06-01 22:48:38.906146 | orchestrator | common : Restart cron container ----------------------------------------- 8.97s 2025-06-01 22:48:38.906155 | orchestrator | common : Copying over config.json files for services -------------------- 5.33s 2025-06-01 22:48:38.906165 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.64s 2025-06-01 22:48:38.906174 | orchestrator | common : Ensuring config directories exist 
------------------------------ 4.18s 2025-06-01 22:48:38.906183 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.71s 2025-06-01 22:48:38.906193 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.71s 2025-06-01 22:48:38.906209 | orchestrator | common : Check common containers ---------------------------------------- 3.63s 2025-06-01 22:48:38.906219 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.54s 2025-06-01 22:48:38.906228 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.46s 2025-06-01 22:48:38.906238 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.41s 2025-06-01 22:48:38.906247 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.39s 2025-06-01 22:48:38.906256 | orchestrator | common : Creating log volume -------------------------------------------- 2.09s 2025-06-01 22:48:38.906273 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.07s 2025-06-01 22:48:38.906283 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.03s 2025-06-01 22:48:38.906292 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.53s 2025-06-01 22:48:38.906302 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.41s 2025-06-01 22:48:38.906311 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.36s 2025-06-01 22:48:38.906321 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.27s 2025-06-01 22:48:38.906330 | orchestrator | 2025-06-01 22:48:38 | INFO  | Task c8725aae-7726-441a-a1ef-b5d71c2bce54 is in state STARTED 2025-06-01 22:48:38.906340 | orchestrator | 2025-06-01 22:48:38 | INFO  | Task 
a92bde46-57c6-4ded-9e4b-0d62e3fe901c is in state STARTED 2025-06-01 22:48:38.906350 | orchestrator | 2025-06-01 22:48:38 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:48:38.906468 | orchestrator | 2025-06-01 22:48:38 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:48:38.906484 | orchestrator | 2025-06-01 22:48:38 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED 2025-06-01 22:48:38.906494 | orchestrator | 2025-06-01 22:48:38 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:48:38.907075 | orchestrator | 2025-06-01 22:48:38 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:48:41.940819 | orchestrator | 2025-06-01 22:48:41 | INFO  | Task c8725aae-7726-441a-a1ef-b5d71c2bce54 is in state STARTED 2025-06-01 22:48:41.940950 | orchestrator | 2025-06-01 22:48:41 | INFO  | Task a92bde46-57c6-4ded-9e4b-0d62e3fe901c is in state STARTED 2025-06-01 22:48:41.941241 | orchestrator | 2025-06-01 22:48:41 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:48:41.942208 | orchestrator | 2025-06-01 22:48:41 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:48:41.942905 | orchestrator | 2025-06-01 22:48:41 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED 2025-06-01 22:48:41.943950 | orchestrator | 2025-06-01 22:48:41 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:48:41.943975 | orchestrator | 2025-06-01 22:48:41 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:48:44.974968 | orchestrator | 2025-06-01 22:48:44 | INFO  | Task c8725aae-7726-441a-a1ef-b5d71c2bce54 is in state STARTED 2025-06-01 22:48:44.975071 | orchestrator | 2025-06-01 22:48:44 | INFO  | Task a92bde46-57c6-4ded-9e4b-0d62e3fe901c is in state STARTED 2025-06-01 22:48:44.975087 | orchestrator | 2025-06-01 22:48:44 | INFO  | Task 
6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:48:44.975099 | orchestrator | 2025-06-01 22:48:44 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:48:44.975110 | orchestrator | 2025-06-01 22:48:44 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED 2025-06-01 22:48:44.975229 | orchestrator | 2025-06-01 22:48:44 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:48:44.976249 | orchestrator | 2025-06-01 22:48:44 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:48:48.010762 | orchestrator | 2025-06-01 22:48:48 | INFO  | Task c8725aae-7726-441a-a1ef-b5d71c2bce54 is in state STARTED 2025-06-01 22:48:48.011493 | orchestrator | 2025-06-01 22:48:48 | INFO  | Task a92bde46-57c6-4ded-9e4b-0d62e3fe901c is in state STARTED 2025-06-01 22:48:48.012556 | orchestrator | 2025-06-01 22:48:48 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:48:48.013575 | orchestrator | 2025-06-01 22:48:48 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:48:48.014783 | orchestrator | 2025-06-01 22:48:48 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED 2025-06-01 22:48:48.016015 | orchestrator | 2025-06-01 22:48:48 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:48:48.017861 | orchestrator | 2025-06-01 22:48:48 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:48:51.057392 | orchestrator | 2025-06-01 22:48:51 | INFO  | Task c8725aae-7726-441a-a1ef-b5d71c2bce54 is in state STARTED 2025-06-01 22:48:51.057506 | orchestrator | 2025-06-01 22:48:51 | INFO  | Task a92bde46-57c6-4ded-9e4b-0d62e3fe901c is in state STARTED 2025-06-01 22:48:51.057522 | orchestrator | 2025-06-01 22:48:51 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:48:51.058270 | orchestrator | 2025-06-01 22:48:51 | INFO  | Task 
47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:48:51.059230 | orchestrator | 2025-06-01 22:48:51 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED 2025-06-01 22:48:51.061548 | orchestrator | 2025-06-01 22:48:51 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:48:51.062840 | orchestrator | 2025-06-01 22:48:51 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:48:54.115994 | orchestrator | 2025-06-01 22:48:54 | INFO  | Task c8725aae-7726-441a-a1ef-b5d71c2bce54 is in state STARTED 2025-06-01 22:48:54.116103 | orchestrator | 2025-06-01 22:48:54 | INFO  | Task a92bde46-57c6-4ded-9e4b-0d62e3fe901c is in state STARTED 2025-06-01 22:48:54.116118 | orchestrator | 2025-06-01 22:48:54 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:48:54.116131 | orchestrator | 2025-06-01 22:48:54 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:48:54.120356 | orchestrator | 2025-06-01 22:48:54 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED 2025-06-01 22:48:54.124508 | orchestrator | 2025-06-01 22:48:54 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:48:54.124539 | orchestrator | 2025-06-01 22:48:54 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:48:57.157019 | orchestrator | 2025-06-01 22:48:57 | INFO  | Task c8725aae-7726-441a-a1ef-b5d71c2bce54 is in state SUCCESS 2025-06-01 22:48:57.158845 | orchestrator | 2025-06-01 22:48:57 | INFO  | Task a92bde46-57c6-4ded-9e4b-0d62e3fe901c is in state STARTED 2025-06-01 22:48:57.160179 | orchestrator | 2025-06-01 22:48:57 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:48:57.160868 | orchestrator | 2025-06-01 22:48:57 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED 2025-06-01 22:48:57.164367 | orchestrator | 2025-06-01 22:48:57 | INFO  | Task 
47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:48:57.164860 | orchestrator | 2025-06-01 22:48:57 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED
2025-06-01 22:48:57.167078 | orchestrator | 2025-06-01 22:48:57 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:48:57.167101 | orchestrator | 2025-06-01 22:48:57 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:49:00.198817 | orchestrator | 2025-06-01 22:49:00 | INFO  | Task a92bde46-57c6-4ded-9e4b-0d62e3fe901c is in state STARTED
2025-06-01 22:49:00.199309 | orchestrator | 2025-06-01 22:49:00 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:49:00.208342 | orchestrator | 2025-06-01 22:49:00 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED
2025-06-01 22:49:00.213040 | orchestrator | 2025-06-01 22:49:00 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:49:00.213062 | orchestrator | 2025-06-01 22:49:00 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED
2025-06-01 22:49:00.213590 | orchestrator | 2025-06-01 22:49:00 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:49:00.213811 | orchestrator | 2025-06-01 22:49:00 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:49:03.259348 | orchestrator | 2025-06-01 22:49:03 | INFO  | Task a92bde46-57c6-4ded-9e4b-0d62e3fe901c is in state STARTED
2025-06-01 22:49:03.260717 | orchestrator | 2025-06-01 22:49:03 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:49:03.262923 | orchestrator | 2025-06-01 22:49:03 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED
2025-06-01 22:49:03.262949 | orchestrator | 2025-06-01 22:49:03 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:49:03.262962 | orchestrator | 2025-06-01 22:49:03 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED
2025-06-01 22:49:03.267415 | orchestrator | 2025-06-01 22:49:03 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:49:03.267499 | orchestrator | 2025-06-01 22:49:03 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:49:06.316818 | orchestrator | 2025-06-01 22:49:06 | INFO  | Task a92bde46-57c6-4ded-9e4b-0d62e3fe901c is in state STARTED
2025-06-01 22:49:06.317030 | orchestrator | 2025-06-01 22:49:06 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:49:06.317501 | orchestrator | 2025-06-01 22:49:06 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED
2025-06-01 22:49:06.317976 | orchestrator | 2025-06-01 22:49:06 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:49:06.318671 | orchestrator | 2025-06-01 22:49:06 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED
2025-06-01 22:49:06.319192 | orchestrator | 2025-06-01 22:49:06 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:49:06.319226 | orchestrator | 2025-06-01 22:49:06 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:49:09.366700 | orchestrator | 2025-06-01 22:49:09 | INFO  | Task a92bde46-57c6-4ded-9e4b-0d62e3fe901c is in state STARTED
2025-06-01 22:49:09.366846 | orchestrator | 2025-06-01 22:49:09 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:49:09.367592 | orchestrator | 2025-06-01 22:49:09 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED
2025-06-01 22:49:09.369551 | orchestrator | 2025-06-01 22:49:09 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:49:09.370885 | orchestrator | 2025-06-01 22:49:09 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED
2025-06-01 22:49:09.372126 | orchestrator | 2025-06-01 22:49:09 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:49:09.372347 | orchestrator | 2025-06-01 22:49:09 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:49:12.426350 | orchestrator | 2025-06-01 22:49:12 | INFO  | Task a92bde46-57c6-4ded-9e4b-0d62e3fe901c is in state STARTED
2025-06-01 22:49:12.426496 | orchestrator | 2025-06-01 22:49:12 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:49:12.427304 | orchestrator | 2025-06-01 22:49:12 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED
2025-06-01 22:49:12.428256 | orchestrator | 2025-06-01 22:49:12 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:49:12.430854 | orchestrator | 2025-06-01 22:49:12 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED
2025-06-01 22:49:12.430875 | orchestrator | 2025-06-01 22:49:12 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:49:12.430932 | orchestrator | 2025-06-01 22:49:12 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:49:15.468169 | orchestrator | 2025-06-01 22:49:15 | INFO  | Task a92bde46-57c6-4ded-9e4b-0d62e3fe901c is in state SUCCESS
2025-06-01 22:49:15.468415 | orchestrator |
2025-06-01 22:49:15.468438 | orchestrator |
2025-06-01 22:49:15.468452 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 22:49:15.468464 | orchestrator |
2025-06-01 22:49:15.468475 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 22:49:15.468487 | orchestrator | Sunday 01 June 2025 22:48:43 +0000 (0:00:00.549) 0:00:00.549 ***********
2025-06-01 22:49:15.468499 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:49:15.468512 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:49:15.468523 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:49:15.468534 | orchestrator |
2025-06-01 22:49:15.468546 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 22:49:15.468557 | orchestrator | Sunday 01 June 2025 22:48:43 +0000 (0:00:00.521) 0:00:01.071 ***********
2025-06-01 22:49:15.468569 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-06-01 22:49:15.468581 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-06-01 22:49:15.468592 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-06-01 22:49:15.468603 | orchestrator |
2025-06-01 22:49:15.468641 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-06-01 22:49:15.468678 | orchestrator |
2025-06-01 22:49:15.468689 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-06-01 22:49:15.468700 | orchestrator | Sunday 01 June 2025 22:48:44 +0000 (0:00:00.538) 0:00:01.609 ***********
2025-06-01 22:49:15.468711 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:49:15.468724 | orchestrator |
2025-06-01 22:49:15.468735 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-06-01 22:49:15.468746 | orchestrator | Sunday 01 June 2025 22:48:44 +0000 (0:00:00.473) 0:00:02.083 ***********
2025-06-01 22:49:15.468758 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-01 22:49:15.468769 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-01 22:49:15.468780 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-01 22:49:15.468792 | orchestrator |
2025-06-01 22:49:15.468803 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-06-01 22:49:15.468814 | orchestrator | Sunday 01 June 2025 22:48:45 +0000 (0:00:00.812) 0:00:02.896 ***********
2025-06-01 22:49:15.468825 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-01 22:49:15.468863 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-01 22:49:15.468874 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-01 22:49:15.468885 | orchestrator |
2025-06-01 22:49:15.468896 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-06-01 22:49:15.468907 | orchestrator | Sunday 01 June 2025 22:48:48 +0000 (0:00:02.962) 0:00:05.858 ***********
2025-06-01 22:49:15.468918 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:49:15.468930 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:49:15.468940 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:49:15.468951 | orchestrator |
2025-06-01 22:49:15.468962 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-06-01 22:49:15.468973 | orchestrator | Sunday 01 June 2025 22:48:51 +0000 (0:00:03.095) 0:00:08.954 ***********
2025-06-01 22:49:15.468984 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:49:15.468995 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:49:15.469006 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:49:15.469017 | orchestrator |
2025-06-01 22:49:15.469028 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:49:15.469040 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:49:15.469055 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:49:15.469068 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:49:15.469081 | orchestrator |
2025-06-01 22:49:15.469094 | orchestrator |
2025-06-01 22:49:15.469107 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:49:15.469120 | orchestrator | Sunday 01 June 2025 22:48:54 +0000 (0:00:02.910) 0:00:11.864 ***********
2025-06-01 22:49:15.469132 | orchestrator | ===============================================================================
2025-06-01 22:49:15.469145 | orchestrator | memcached : Check memcached container ----------------------------------- 3.10s
2025-06-01 22:49:15.469158 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.96s
2025-06-01 22:49:15.469170 | orchestrator | memcached : Restart memcached container --------------------------------- 2.91s
2025-06-01 22:49:15.469182 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.81s
2025-06-01 22:49:15.469195 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.54s
2025-06-01 22:49:15.469207 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.52s
2025-06-01 22:49:15.469219 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.47s
2025-06-01 22:49:15.469231 | orchestrator |
2025-06-01 22:49:15.469438 | orchestrator |
2025-06-01 22:49:15.469453 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 22:49:15.469464 | orchestrator |
2025-06-01 22:49:15.469475 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 22:49:15.469486 | orchestrator | Sunday 01 June 2025 22:48:43 +0000 (0:00:00.364) 0:00:00.364 ***********
2025-06-01 22:49:15.469497 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:49:15.469508 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:49:15.469519 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:49:15.469530 | orchestrator |
2025-06-01 22:49:15.469541 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 22:49:15.469552
| orchestrator | Sunday 01 June 2025 22:48:43 +0000 (0:00:00.561) 0:00:00.925 *********** 2025-06-01 22:49:15.469562 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-06-01 22:49:15.469573 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-06-01 22:49:15.469584 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-06-01 22:49:15.469595 | orchestrator | 2025-06-01 22:49:15.469614 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-06-01 22:49:15.469626 | orchestrator | 2025-06-01 22:49:15.469637 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-06-01 22:49:15.469666 | orchestrator | Sunday 01 June 2025 22:48:44 +0000 (0:00:00.465) 0:00:01.391 *********** 2025-06-01 22:49:15.469678 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:49:15.469689 | orchestrator | 2025-06-01 22:49:15.469700 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-06-01 22:49:15.469711 | orchestrator | Sunday 01 June 2025 22:48:44 +0000 (0:00:00.810) 0:00:02.201 *********** 2025-06-01 22:49:15.469733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.469751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 
'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.469763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.469775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.469800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': 
'/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.469813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.469832 | orchestrator | 2025-06-01 22:49:15.469843 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-06-01 22:49:15.469854 | orchestrator | Sunday 01 June 2025 22:48:46 +0000 (0:00:01.361) 0:00:03.563 *********** 2025-06-01 22:49:15.469866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.469878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.469889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.469901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.469918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.469937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.469948 | orchestrator | 2025-06-01 22:49:15.469960 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-06-01 22:49:15.469971 | orchestrator | Sunday 01 June 2025 22:48:49 +0000 (0:00:03.540) 0:00:07.104 *********** 2025-06-01 22:49:15.469994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.470006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.470093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.470108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.470127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.470149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.470161 | orchestrator | 2025-06-01 22:49:15.470173 
| orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-06-01 22:49:15.470186 | orchestrator | Sunday 01 June 2025 22:48:53 +0000 (0:00:03.879) 0:00:10.984 *********** 2025-06-01 22:49:15.470205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.470219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.470232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 
'timeout': '30'}}}) 2025-06-01 22:49:15.470245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.470258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-01 22:49:15.470284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-01 22:49:15.470297 | orchestrator |
2025-06-01 22:49:15.470310 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-01 22:49:15.470323 | orchestrator | Sunday 01 June 2025 22:48:55 +0000 (0:00:02.231) 0:00:13.216 ***********
2025-06-01 22:49:15.470336 | orchestrator |
2025-06-01 22:49:15.470348 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-01 22:49:15.470361 | orchestrator | Sunday 01 June 2025 22:48:55 +0000 (0:00:00.063) 0:00:13.279 ***********
2025-06-01 22:49:15.470373 | orchestrator |
2025-06-01 22:49:15.470385 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-01 22:49:15.470398 | orchestrator | Sunday 01 June 2025 22:48:56 +0000 (0:00:00.166) 0:00:13.446 ***********
2025-06-01 22:49:15.470410 | orchestrator |
2025-06-01 22:49:15.470423 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-06-01 22:49:15.470439 | orchestrator | Sunday 01 June 2025 22:48:56 +0000 (0:00:00.134) 0:00:13.582 ***********
2025-06-01 22:49:15.470450 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:49:15.470461 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:49:15.470472 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:49:15.470483 | orchestrator |
2025-06-01 22:49:15.470494 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-06-01 22:49:15.470505 | orchestrator | Sunday 01 June 2025 22:49:03 +0000 (0:00:07.329) 0:00:20.911 ***********
2025-06-01 22:49:15.470516 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:49:15.470527 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:49:15.470538 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:49:15.470548 | orchestrator |
2025-06-01 22:49:15.470559 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:49:15.470570 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:49:15.470582 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:49:15.470593 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 22:49:15.470604 | orchestrator |
2025-06-01 22:49:15.470615 | orchestrator |
2025-06-01 22:49:15.470626 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:49:15.470636 | orchestrator | Sunday 01 June 2025 22:49:12 +0000 (0:00:09.392) 0:00:30.303 ***********
2025-06-01 22:49:15.470679 | orchestrator | ===============================================================================
2025-06-01 22:49:15.470691 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.39s
2025-06-01 22:49:15.470702 | orchestrator | redis : Restart redis container ----------------------------------------- 7.33s
2025-06-01 22:49:15.470712 | orchestrator | redis : Copying over redis config files --------------------------------- 3.88s
2025-06-01 22:49:15.470732 | orchestrator | redis : Copying over default config.json files -------------------------- 3.54s
2025-06-01 22:49:15.470743 | orchestrator | redis : Check redis containers ------------------------------------------ 2.23s
2025-06-01 22:49:15.470754 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.36s
2025-06-01 22:49:15.470764 | orchestrator | redis : include_tasks --------------------------------------------------- 0.81s
2025-06-01 22:49:15.470775 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.56s
2025-06-01 22:49:15.470786 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s
2025-06-01 22:49:15.470797 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.37s
2025-06-01 22:49:15.470812 | orchestrator | 2025-06-01 22:49:15 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:49:15.473417 | orchestrator | 2025-06-01 22:49:15 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED
2025-06-01 22:49:15.475550 | orchestrator | 2025-06-01 22:49:15 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:49:15.476764 | orchestrator | 2025-06-01 22:49:15 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED
2025-06-01 22:49:15.477969 | orchestrator | 2025-06-01 22:49:15 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:49:15.477990 | orchestrator | 2025-06-01 22:49:15 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:49:18.519040 | orchestrator | 2025-06-01 22:49:18 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:49:18.519436 | orchestrator | 2025-06-01 22:49:18 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED
2025-06-01 22:49:18.520910 | orchestrator | 2025-06-01 22:49:18 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:49:18.522164 | orchestrator | 2025-06-01 22:49:18 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED
2025-06-01 22:49:18.524115 | orchestrator | 2025-06-01 22:49:18 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:49:18.524133 | orchestrator | 2025-06-01 22:49:18 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:49:21.566288 | orchestrator | 2025-06-01 22:49:21 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:49:21.566429 | orchestrator | 2025-06-01 22:49:21 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED
2025-06-01 22:49:21.566445 | orchestrator | 2025-06-01 22:49:21 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:49:21.566472 | orchestrator | 2025-06-01 22:49:21 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED
2025-06-01 22:49:21.567507 | orchestrator | 2025-06-01 22:49:21 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:49:21.567786 | orchestrator | 2025-06-01 22:49:21 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:49:24.608995 | orchestrator | 2025-06-01 22:49:24 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:49:24.610101 | orchestrator | 2025-06-01 22:49:24 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED
2025-06-01 22:49:24.611263 | orchestrator | 2025-06-01 22:49:24 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:49:24.612284 | orchestrator | 2025-06-01 22:49:24 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED
2025-06-01 22:49:24.613127 | orchestrator | 2025-06-01 22:49:24 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:49:24.613365 | orchestrator | 2025-06-01 22:49:24 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:49:27.637952 | orchestrator | 2025-06-01 22:49:27 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:49:27.639085 | orchestrator | 2025-06-01 22:49:27 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED
2025-06-01 22:49:27.640580 | orchestrator | 2025-06-01 22:49:27 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:49:27.641871 | orchestrator | 2025-06-01 22:49:27 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED
2025-06-01 22:49:27.643370 | orchestrator | 2025-06-01 22:49:27 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:49:27.643539 | orchestrator | 2025-06-01 22:49:27 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:49:30.685985 | orchestrator | 2025-06-01 22:49:30 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:49:30.689157 | orchestrator | 2025-06-01 22:49:30 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED
2025-06-01 22:49:30.691588 | orchestrator | 2025-06-01 22:49:30 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:49:30.696688 | orchestrator | 2025-06-01 22:49:30 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED
2025-06-01 22:49:30.697766 | orchestrator | 2025-06-01 22:49:30 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:49:30.697908 | orchestrator | 2025-06-01 22:49:30 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:49:33.741559 | orchestrator | 2025-06-01 22:49:33 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:49:33.741793 | orchestrator | 2025-06-01 22:49:33 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED
2025-06-01 22:49:33.744871 | orchestrator | 2025-06-01 22:49:33 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:49:33.746552 | orchestrator | 2025-06-01 22:49:33 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED
2025-06-01 22:49:33.749427 | orchestrator | 2025-06-01 22:49:33 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:49:33.749452 | orchestrator | 2025-06-01 22:49:33 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:49:36.788012 | orchestrator | 2025-06-01 22:49:36 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:49:36.788141 | orchestrator | 2025-06-01 22:49:36 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED
2025-06-01 22:49:36.789032 | orchestrator | 2025-06-01 22:49:36 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:49:36.791393 | orchestrator | 2025-06-01 22:49:36 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED
2025-06-01 22:49:36.795899 | orchestrator | 2025-06-01 22:49:36 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:49:36.795923 | orchestrator | 2025-06-01 22:49:36 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:49:39.837073 | orchestrator | 2025-06-01 22:49:39 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:49:39.839815 | orchestrator | 2025-06-01 22:49:39 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED
2025-06-01 22:49:39.840828 | orchestrator | 2025-06-01 22:49:39 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:49:39.843095 | orchestrator | 2025-06-01 22:49:39 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED
2025-06-01 22:49:39.844923 | orchestrator | 2025-06-01 22:49:39 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED
2025-06-01 22:49:39.845216 | orchestrator | 2025-06-01 22:49:39 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:49:42.883303 | orchestrator | 2025-06-01 22:49:42 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:49:42.883885 | orchestrator | 2025-06-01 22:49:42 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED
2025-06-01 22:49:42.884996 | orchestrator | 2025-06-01 22:49:42 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01
22:49:42.887228 | orchestrator | 2025-06-01 22:49:42 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED 2025-06-01 22:49:42.889597 | orchestrator | 2025-06-01 22:49:42 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:49:42.890220 | orchestrator | 2025-06-01 22:49:42 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:49:45.934637 | orchestrator | 2025-06-01 22:49:45 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:49:45.936150 | orchestrator | 2025-06-01 22:49:45 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED 2025-06-01 22:49:45.937785 | orchestrator | 2025-06-01 22:49:45 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:49:45.940124 | orchestrator | 2025-06-01 22:49:45 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED 2025-06-01 22:49:45.941909 | orchestrator | 2025-06-01 22:49:45 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:49:45.942175 | orchestrator | 2025-06-01 22:49:45 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:49:48.995149 | orchestrator | 2025-06-01 22:49:48 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:49:48.995284 | orchestrator | 2025-06-01 22:49:48 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED 2025-06-01 22:49:48.996713 | orchestrator | 2025-06-01 22:49:48 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:49:48.997579 | orchestrator | 2025-06-01 22:49:48 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED 2025-06-01 22:49:48.998457 | orchestrator | 2025-06-01 22:49:48 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:49:48.998481 | orchestrator | 2025-06-01 22:49:48 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:49:52.040441 | orchestrator 
| 2025-06-01 22:49:52 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:49:52.042490 | orchestrator | 2025-06-01 22:49:52 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED 2025-06-01 22:49:52.044574 | orchestrator | 2025-06-01 22:49:52 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:49:52.046265 | orchestrator | 2025-06-01 22:49:52 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED 2025-06-01 22:49:52.048531 | orchestrator | 2025-06-01 22:49:52 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:49:52.048737 | orchestrator | 2025-06-01 22:49:52 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:49:55.099267 | orchestrator | 2025-06-01 22:49:55 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:49:55.099783 | orchestrator | 2025-06-01 22:49:55 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED 2025-06-01 22:49:55.105206 | orchestrator | 2025-06-01 22:49:55 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:49:55.107077 | orchestrator | 2025-06-01 22:49:55 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state STARTED 2025-06-01 22:49:55.111758 | orchestrator | 2025-06-01 22:49:55 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:49:55.111794 | orchestrator | 2025-06-01 22:49:55 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:49:58.145112 | orchestrator | 2025-06-01 22:49:58 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:49:58.145960 | orchestrator | 2025-06-01 22:49:58 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:49:58.146767 | orchestrator | 2025-06-01 22:49:58 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED 2025-06-01 22:49:58.147795 | orchestrator | 
2025-06-01 22:49:58 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:49:58.149849 | orchestrator | 2025-06-01 22:49:58 | INFO  | Task 25272758-2746-4b9c-bdeb-63761711612d is in state SUCCESS
2025-06-01 22:49:58.151510 | orchestrator |
2025-06-01 22:49:58.151524 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 22:49:58.151536 | orchestrator |
2025-06-01 22:49:58.151547 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 22:49:58.151559 | orchestrator | Sunday 01 June 2025 22:48:43 +0000 (0:00:00.475) 0:00:00.475 ***********
2025-06-01 22:49:58.151571 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:49:58.151583 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:49:58.151594 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:49:58.151605 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:49:58.151616 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:49:58.151626 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:49:58.151638 | orchestrator |
2025-06-01 22:49:58.151683 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 22:49:58.151695 | orchestrator | Sunday 01 June 2025 22:48:44 +0000 (0:00:01.109) 0:00:01.584 ***********
2025-06-01 22:49:58.151707 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-01 22:49:58.151718 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-01 22:49:58.151729 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-01 22:49:58.151740 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-01 22:49:58.151751 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-01 22:49:58.151762 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-01 22:49:58.151773 | orchestrator |
2025-06-01 22:49:58.151784 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-06-01 22:49:58.151795 | orchestrator |
2025-06-01 22:49:58.151806 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-06-01 22:49:58.151817 | orchestrator | Sunday 01 June 2025 22:48:45 +0000 (0:00:00.737) 0:00:02.322 ***********
2025-06-01 22:49:58.151830 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:49:58.151843 | orchestrator |
2025-06-01 22:49:58.151854 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-01 22:49:58.151865 | orchestrator | Sunday 01 June 2025 22:48:46 +0000 (0:00:01.888) 0:00:04.211 ***********
2025-06-01 22:49:58.151876 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-06-01 22:49:58.151910 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-06-01 22:49:58.151922 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-06-01 22:49:58.151933 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-06-01 22:49:58.151944 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-06-01 22:49:58.151954 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-06-01 22:49:58.151965 | orchestrator |
2025-06-01 22:49:58.151976 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-01 22:49:58.151987 | orchestrator | Sunday 01 June 2025 22:48:50 +0000 (0:00:03.174) 0:00:07.386 ***********
2025-06-01 22:49:58.151998 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-06-01 22:49:58.152010 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-06-01 22:49:58.152021 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-06-01 22:49:58.152031 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-06-01 22:49:58.152042 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-06-01 22:49:58.152053 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-06-01 22:49:58.152064 | orchestrator |
2025-06-01 22:49:58.152075 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-01 22:49:58.152086 | orchestrator | Sunday 01 June 2025 22:48:52 +0000 (0:00:02.630) 0:00:10.016 ***********
2025-06-01 22:49:58.152097 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-06-01 22:49:58.152108 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:49:58.152119 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-06-01 22:49:58.152130 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:49:58.152141 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-06-01 22:49:58.152152 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:49:58.152163 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-06-01 22:49:58.152174 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:49:58.152185 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-06-01 22:49:58.152196 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:49:58.152207 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-06-01 22:49:58.152217 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:49:58.152228 | orchestrator |
2025-06-01 22:49:58.152239 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-06-01 22:49:58.152250 | orchestrator | Sunday 01 June 2025 22:48:54 +0000 (0:00:02.221) 0:00:12.238 ***********
2025-06-01 22:49:58.152261 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:49:58.152272 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:49:58.152283 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:49:58.152293 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:49:58.152304 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:49:58.152315 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:49:58.152326 | orchestrator |
2025-06-01 22:49:58.152337 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-06-01 22:49:58.152348 | orchestrator | Sunday 01 June 2025 22:48:55 +0000 (0:00:00.998) 0:00:13.237 ***********
2025-06-01 22:49:58.152383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-01 22:49:58.152443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
[… identical 'openvswitch-db-server' and 'openvswitch-vswitchd' item results for testbed-node-1 through testbed-node-5 elided …]
2025-06-01 22:49:58.152573 | orchestrator |
2025-06-01 22:49:58.152585 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-06-01 22:49:58.152596 | orchestrator | Sunday 01 June 2025 22:48:58 +0000 (0:00:02.928) 0:00:16.165 ***********
2025-06-01 22:49:58.152608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-01 22:49:58.152695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
[… identical 'openvswitch-db-server' and 'openvswitch-vswitchd' item results for testbed-node-1 through testbed-node-5 elided …]
2025-06-01 22:49:58.152796 | orchestrator |
2025-06-01 22:49:58.152807 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-06-01 22:49:58.152819 | orchestrator | Sunday 01 June 2025 22:49:02 +0000 (0:00:03.233) 0:00:19.399 ***********
2025-06-01 22:49:58.152830 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:49:58.152841 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:49:58.152852 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:49:58.152863 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:49:58.152874 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:49:58.152885 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:49:58.152895 | orchestrator |
2025-06-01 22:49:58.152906 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-06-01 22:49:58.152917 | orchestrator | Sunday 01 June 2025 22:49:03 +0000 (0:00:00.958) 0:00:20.357 ***********
2025-06-01 22:49:58.152936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
[… identical item results for the remaining nodes elided …]
'timeout': '30'}}}) 2025-06-01 22:49:58.153011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 22:49:58.153022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 22:49:58.153033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 22:49:58.153044 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 22:49:58.153081 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 22:49:58.153094 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': 
True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-01 22:49:58.153105 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 22:49:58.153117 | orchestrator | 2025-06-01 22:49:58.153128 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-01 22:49:58.153139 | orchestrator | Sunday 01 June 2025 22:49:07 +0000 (0:00:04.299) 0:00:24.656 *********** 2025-06-01 22:49:58.153150 | orchestrator | 2025-06-01 22:49:58.153161 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-01 22:49:58.153172 | orchestrator | Sunday 01 June 2025 22:49:07 +0000 (0:00:00.175) 0:00:24.832 *********** 2025-06-01 22:49:58.153183 | orchestrator | 2025-06-01 22:49:58.153194 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
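The healthcheck dicts in the container definitions above (interval 30, retries 3, start_period 5, a CMD-SHELL test such as `ovsdb-client list-dbs` or `ovs-appctl version`, timeout 30) follow Docker's healthcheck schema. A minimal sketch of how such a dict could be rendered into `docker run` flags — an illustrative helper, not kolla-ansible's actual code:

```python
# Sketch: translate a kolla-style healthcheck dict into docker CLI flags.
# Illustrative helper only; kolla-ansible applies these through its own modules.

def healthcheck_flags(hc: dict) -> list[str]:
    """Map a healthcheck dict like the ones logged above to docker flags."""
    flags = []
    if hc.get("test"):
        # kolla logs ['CMD-SHELL', '<command>']; docker takes the raw command
        flags += ["--health-cmd", hc["test"][-1]]
    for key, flag in [("interval", "--health-interval"),
                      ("retries", "--health-retries"),
                      ("start_period", "--health-start-period"),
                      ("timeout", "--health-timeout")]:
        if key in hc:
            value = hc[key]
            # bare numbers in the log are seconds; retries is a plain count
            flags += [flag, str(value) if flag == "--health-retries" else f"{value}s"]
    return flags

hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "ovsdb-client list-dbs"], "timeout": "30"}
print(healthcheck_flags(hc))
```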
2025-06-01 22:49:58.153204 | orchestrator | Sunday 01 June 2025 22:49:07 +0000 (0:00:00.226) 0:00:25.058 *********** 2025-06-01 22:49:58.153215 | orchestrator | 2025-06-01 22:49:58.153226 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-01 22:49:58.153237 | orchestrator | Sunday 01 June 2025 22:49:08 +0000 (0:00:00.360) 0:00:25.419 *********** 2025-06-01 22:49:58.153247 | orchestrator | 2025-06-01 22:49:58.153258 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-01 22:49:58.153270 | orchestrator | Sunday 01 June 2025 22:49:08 +0000 (0:00:00.373) 0:00:25.792 *********** 2025-06-01 22:49:58.153289 | orchestrator | 2025-06-01 22:49:58.153307 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-01 22:49:58.153343 | orchestrator | Sunday 01 June 2025 22:49:08 +0000 (0:00:00.215) 0:00:26.008 *********** 2025-06-01 22:49:58.153367 | orchestrator | 2025-06-01 22:49:58.153384 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-06-01 22:49:58.153401 | orchestrator | Sunday 01 June 2025 22:49:09 +0000 (0:00:00.648) 0:00:26.657 *********** 2025-06-01 22:49:58.153417 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:49:58.153434 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:49:58.153452 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:49:58.153468 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:49:58.153487 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:49:58.153506 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:49:58.153524 | orchestrator | 2025-06-01 22:49:58.153542 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-06-01 22:49:58.153559 | orchestrator | Sunday 01 June 2025 22:49:20 +0000 (0:00:10.607) 0:00:37.264 *********** 2025-06-01 22:49:58.153571 | 
orchestrator | ok: [testbed-node-0] 2025-06-01 22:49:58.153582 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:49:58.153592 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:49:58.153603 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:49:58.153614 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:49:58.153624 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:49:58.153635 | orchestrator | 2025-06-01 22:49:58.153672 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-01 22:49:58.153684 | orchestrator | Sunday 01 June 2025 22:49:22 +0000 (0:00:02.936) 0:00:40.200 *********** 2025-06-01 22:49:58.153695 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:49:58.153705 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:49:58.153716 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:49:58.153727 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:49:58.153738 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:49:58.153748 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:49:58.153759 | orchestrator | 2025-06-01 22:49:58.153770 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-06-01 22:49:58.153781 | orchestrator | Sunday 01 June 2025 22:49:33 +0000 (0:00:11.000) 0:00:51.201 *********** 2025-06-01 22:49:58.153808 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-06-01 22:49:58.153865 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-06-01 22:49:58.153877 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-06-01 22:49:58.153888 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-06-01 22:49:58.153901 | 
orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-06-01 22:49:58.153920 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-06-01 22:49:58.153937 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-06-01 22:49:58.153950 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-06-01 22:49:58.153969 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-06-01 22:49:58.153981 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-06-01 22:49:58.153991 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-06-01 22:49:58.154002 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-06-01 22:49:58.154057 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-01 22:49:58.154081 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-01 22:49:58.154093 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-01 22:49:58.154103 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-01 22:49:58.154114 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-01 22:49:58.154125 | orchestrator | 
ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-01 22:49:58.154136 | orchestrator | 2025-06-01 22:49:58.154147 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-06-01 22:49:58.154159 | orchestrator | Sunday 01 June 2025 22:49:41 +0000 (0:00:07.552) 0:00:58.753 *********** 2025-06-01 22:49:58.154170 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-06-01 22:49:58.154181 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:49:58.154193 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-06-01 22:49:58.154204 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:49:58.154221 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-06-01 22:49:58.154239 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:49:58.154257 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-06-01 22:49:58.154274 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-06-01 22:49:58.154302 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-06-01 22:49:58.154323 | orchestrator | 2025-06-01 22:49:58.154343 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-06-01 22:49:58.154361 | orchestrator | Sunday 01 June 2025 22:49:43 +0000 (0:00:02.294) 0:01:01.048 *********** 2025-06-01 22:49:58.154380 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-06-01 22:49:58.154391 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:49:58.154402 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-06-01 22:49:58.154413 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:49:58.154423 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-06-01 22:49:58.154434 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:49:58.154445 | orchestrator | changed: 
[testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-06-01 22:49:58.154456 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-06-01 22:49:58.154466 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-06-01 22:49:58.154477 | orchestrator | 2025-06-01 22:49:58.154488 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-01 22:49:58.154498 | orchestrator | Sunday 01 June 2025 22:49:47 +0000 (0:00:03.810) 0:01:04.858 *********** 2025-06-01 22:49:58.154509 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:49:58.154520 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:49:58.154531 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:49:58.154541 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:49:58.154552 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:49:58.154562 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:49:58.154573 | orchestrator | 2025-06-01 22:49:58.154584 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:49:58.154595 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-01 22:49:58.154624 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-01 22:49:58.154637 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-01 22:49:58.154685 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 22:49:58.154697 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 22:49:58.154708 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 22:49:58.154719 | orchestrator | 2025-06-01 22:49:58.154730 | 
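The tasks above set `external_ids` on the Open_vSwitch table (system-id, hostname per node), ensure the br-ex bridge exists, and attach the vxlan0 port to it. The underlying ovs-vsctl invocations look roughly like this — a sketch that only builds the command lines; kolla-ansible drives OVS through its own openvswitch modules, not this helper:

```python
# Sketch of the ovs-vsctl commands behind the tasks above (illustrative only).

def set_external_id(name: str, value: str) -> list[str]:
    # e.g. "Set system-id, hostname" -> external_ids:system-id=testbed-node-0
    return ["ovs-vsctl", "set", "Open_vSwitch", ".",
            f"external_ids:{name}={value}"]

def ensure_bridge_with_port(bridge: str, port: str) -> list[list[str]]:
    # --may-exist makes both calls idempotent, matching the role's
    # "Ensuring ... properly setup" semantics
    return [
        ["ovs-vsctl", "--may-exist", "add-br", bridge],
        ["ovs-vsctl", "--may-exist", "add-port", bridge, port],
    ]

print(set_external_id("system-id", "testbed-node-0"))
print(ensure_bridge_with_port("br-ex", "vxlan0"))
```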
orchestrator | 2025-06-01 22:49:58.154740 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:49:58.154751 | orchestrator | Sunday 01 June 2025 22:49:55 +0000 (0:00:07.609) 0:01:12.468 *********** 2025-06-01 22:49:58.154762 | orchestrator | =============================================================================== 2025-06-01 22:49:58.154773 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.61s 2025-06-01 22:49:58.154783 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.61s 2025-06-01 22:49:58.154794 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.55s 2025-06-01 22:49:58.154805 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 4.30s 2025-06-01 22:49:58.154815 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.81s 2025-06-01 22:49:58.154826 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.23s 2025-06-01 22:49:58.154837 | orchestrator | module-load : Load modules ---------------------------------------------- 3.17s 2025-06-01 22:49:58.154847 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.94s 2025-06-01 22:49:58.154858 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.93s 2025-06-01 22:49:58.154869 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.63s 2025-06-01 22:49:58.154879 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.29s 2025-06-01 22:49:58.154890 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.22s 2025-06-01 22:49:58.154903 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.00s 
2025-06-01 22:49:58.154921 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.89s 2025-06-01 22:49:58.154938 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.11s 2025-06-01 22:49:58.154952 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.00s 2025-06-01 22:49:58.154963 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.96s 2025-06-01 22:49:58.154974 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s 2025-06-01 22:49:58.154985 | orchestrator | 2025-06-01 22:49:58 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:49:58.154996 | orchestrator | 2025-06-01 22:49:58 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:50:01.191165 | orchestrator | 2025-06-01 22:50:01 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:50:01.192027 | orchestrator | 2025-06-01 22:50:01 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:50:01.194991 | orchestrator | 2025-06-01 22:50:01 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED 2025-06-01 22:50:01.196434 | orchestrator | 2025-06-01 22:50:01 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:50:01.198382 | orchestrator | 2025-06-01 22:50:01 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:50:01.198403 | orchestrator | 2025-06-01 22:50:01 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:50:04.246246 | orchestrator | 2025-06-01 22:50:04 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:50:04.246467 | orchestrator | 2025-06-01 22:50:04 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:50:04.247532 | orchestrator | 2025-06-01 22:50:04 | 
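The osism watcher above polls each task's state once per second until it leaves STARTED. A minimal sketch of that wait loop, where `fetch_state` is a hypothetical stand-in for the real task-state API call:

```python
import time

def wait_for_task(task_id, fetch_state, interval=1.0, max_checks=1000):
    """Poll fetch_state(task_id) until it reports a terminal state.

    fetch_state is a hypothetical stand-in for the real task-state lookup.
    """
    for _ in range(max_checks):
        state = fetch_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state in ("SUCCESS", "FAILURE"):
            return state
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish")

# usage with a fake state sequence (interval=0 so the demo runs instantly)
states = iter(["STARTED", "STARTED", "SUCCESS"])
result = wait_for_task("0ee1b0cd", lambda _tid: next(states), interval=0)
```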
INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED [identical once-per-second STARTED checks for the five tasks repeated through 22:50:34] 2025-06-01 22:50:34.761570 | orchestrator | 2025-06-01 22:50:34 | INFO  | Task 
55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED 2025-06-01 22:50:34.762326 | orchestrator | 2025-06-01 22:50:34 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:50:34.763101 | orchestrator | 2025-06-01 22:50:34 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state STARTED 2025-06-01 22:50:34.763122 | orchestrator | 2025-06-01 22:50:34 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:50:37.797977 | orchestrator | 2025-06-01 22:50:37 | INFO  | Task 9f3306f5-0f8e-497a-a3b0-0973bd5c67cc is in state STARTED 2025-06-01 22:50:37.801764 | orchestrator | 2025-06-01 22:50:37 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:50:37.801795 | orchestrator | 2025-06-01 22:50:37 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:50:37.801807 | orchestrator | 2025-06-01 22:50:37 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED 2025-06-01 22:50:37.801818 | orchestrator | 2025-06-01 22:50:37 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:50:37.801829 | orchestrator | 2025-06-01 22:50:37 | INFO  | Task 379e48b1-f8c2-4fe7-86c7-9d11058c42e4 is in state STARTED 2025-06-01 22:50:37.801840 | orchestrator | 2025-06-01 22:50:37 | INFO  | Task 0ee1b0cd-9c8b-43a6-b5ed-385dd3488c56 is in state SUCCESS 2025-06-01 22:50:37.801851 | orchestrator | 2025-06-01 22:50:37 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:50:37.803129 | orchestrator | 2025-06-01 22:50:37.803233 | orchestrator | 2025-06-01 22:50:37.803249 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-06-01 22:50:37.803262 | orchestrator | 2025-06-01 22:50:37.803274 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-06-01 22:50:37.803286 | orchestrator | Sunday 01 June 2025 22:46:15 +0000 (0:00:00.185) 0:00:00.185 
*********** 2025-06-01 22:50:37.803297 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:50:37.803309 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:50:37.803320 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:50:37.803348 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:50:37.803360 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:50:37.803371 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:50:37.803382 | orchestrator | 2025-06-01 22:50:37.803393 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-06-01 22:50:37.803404 | orchestrator | Sunday 01 June 2025 22:46:16 +0000 (0:00:00.792) 0:00:00.977 *********** 2025-06-01 22:50:37.803415 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:50:37.803427 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:50:37.803437 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:50:37.803448 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:50:37.803459 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:50:37.803470 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:50:37.803480 | orchestrator | 2025-06-01 22:50:37.803491 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-06-01 22:50:37.803526 | orchestrator | Sunday 01 June 2025 22:46:17 +0000 (0:00:00.671) 0:00:01.648 *********** 2025-06-01 22:50:37.803537 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:50:37.803548 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:50:37.803559 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:50:37.803569 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:50:37.803580 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:50:37.803590 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:50:37.803601 | orchestrator | 2025-06-01 22:50:37.803612 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 
2025-06-01 22:50:37.803622 | orchestrator | Sunday 01 June 2025 22:46:18 +0000 (0:00:00.882) 0:00:02.530 *********** 2025-06-01 22:50:37.803677 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:50:37.803691 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:50:37.803703 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:50:37.803716 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:50:37.803729 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:50:37.803741 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:50:37.803753 | orchestrator | 2025-06-01 22:50:37.803766 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-06-01 22:50:37.803778 | orchestrator | Sunday 01 June 2025 22:46:20 +0000 (0:00:02.243) 0:00:04.773 *********** 2025-06-01 22:50:37.803790 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:50:37.803803 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:50:37.803815 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:50:37.803828 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:50:37.803840 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:50:37.803852 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:50:37.803865 | orchestrator | 2025-06-01 22:50:37.803877 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-06-01 22:50:37.803889 | orchestrator | Sunday 01 June 2025 22:46:21 +0000 (0:00:01.185) 0:00:05.959 *********** 2025-06-01 22:50:37.803901 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:50:37.803913 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:50:37.803926 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:50:37.803938 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:50:37.803950 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:50:37.803962 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:50:37.803974 | orchestrator | 
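The forwarding tasks above persist kernel settings via Ansible's sysctl module. Outside of Ansible, the same effect can be sketched as a sysctl.d fragment (a minimal illustration — the filename 90-k3s.conf is hypothetical, and any keys the k3s_prereq role sets beyond ip_forward, IPv6 forwarding, and accept_ra are assumptions; a temp directory stands in for /etc/sysctl.d so the sketch runs unprivileged):

```shell
#!/bin/sh
# Sketch of what "Enable IPv4 forwarding", "Enable IPv6 forwarding" and
# "Enable IPv6 router advertisements" persist. On a real node the file would
# live under /etc/sysctl.d/ and be applied with `sysctl --system`.
conf_dir="$(mktemp -d)"
conf="${conf_dir}/90-k3s.conf"   # hypothetical filename

cat > "$conf" <<'EOF'
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.accept_ra = 2
EOF

cat "$conf"
```

accept_ra=2 keeps router advertisements honored even with forwarding enabled, which is why the role treats it as a separate task from plain IPv6 forwarding.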
2025-06-01 22:50:37.803987 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-06-01 22:50:37.804000 | orchestrator | Sunday 01 June 2025 22:46:22 +0000 (0:00:01.077) 0:00:07.037 *********** 2025-06-01 22:50:37.804012 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:50:37.804024 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:50:37.804035 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:50:37.804046 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:50:37.804056 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:50:37.804066 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:50:37.804077 | orchestrator | 2025-06-01 22:50:37.804088 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-06-01 22:50:37.804099 | orchestrator | Sunday 01 June 2025 22:46:23 +0000 (0:00:00.873) 0:00:07.910 *********** 2025-06-01 22:50:37.804109 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:50:37.804120 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:50:37.804131 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:50:37.804141 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:50:37.804152 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:50:37.804162 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:50:37.804173 | orchestrator | 2025-06-01 22:50:37.804183 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-06-01 22:50:37.804194 | orchestrator | Sunday 01 June 2025 22:46:24 +0000 (0:00:01.072) 0:00:08.983 *********** 2025-06-01 22:50:37.804214 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-01 22:50:37.804224 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-01 22:50:37.804235 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:50:37.804246 
| orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-01 22:50:37.804257 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-01 22:50:37.804267 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:50:37.804278 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-01 22:50:37.804289 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-01 22:50:37.804299 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:50:37.804310 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-01 22:50:37.804339 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-01 22:50:37.804351 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:50:37.804361 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-01 22:50:37.804372 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-01 22:50:37.804383 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:50:37.804400 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-01 22:50:37.804411 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-01 22:50:37.804422 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:50:37.804433 | orchestrator | 2025-06-01 22:50:37.804443 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-06-01 22:50:37.804454 | orchestrator | Sunday 01 June 2025 22:46:25 +0000 (0:00:01.113) 0:00:10.096 *********** 2025-06-01 22:50:37.804465 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:50:37.804476 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:50:37.804486 | orchestrator | skipping: 
[testbed-node-5] 2025-06-01 22:50:37.804497 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:50:37.804508 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:50:37.804518 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:50:37.804529 | orchestrator | 2025-06-01 22:50:37.804540 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-06-01 22:50:37.804552 | orchestrator | Sunday 01 June 2025 22:46:26 +0000 (0:00:01.288) 0:00:11.385 *********** 2025-06-01 22:50:37.804563 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:50:37.804574 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:50:37.804584 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:50:37.804595 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:50:37.804606 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:50:37.804617 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:50:37.804627 | orchestrator | 2025-06-01 22:50:37.804655 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-06-01 22:50:37.804666 | orchestrator | Sunday 01 June 2025 22:46:27 +0000 (0:00:00.579) 0:00:11.964 *********** 2025-06-01 22:50:37.804677 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:50:37.804688 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:50:37.804699 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:50:37.804709 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:50:37.804720 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:50:37.804730 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:50:37.804741 | orchestrator | 2025-06-01 22:50:37.804752 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-06-01 22:50:37.804763 | orchestrator | Sunday 01 June 2025 22:46:33 +0000 (0:00:06.212) 0:00:18.177 *********** 2025-06-01 22:50:37.804773 | orchestrator | skipping: [testbed-node-3] 
2025-06-01 22:50:37.804784 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:50:37.804802 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:50:37.804813 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:50:37.804824 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:50:37.804835 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:50:37.804845 | orchestrator | 2025-06-01 22:50:37.804856 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-06-01 22:50:37.804867 | orchestrator | Sunday 01 June 2025 22:46:35 +0000 (0:00:01.406) 0:00:19.583 *********** 2025-06-01 22:50:37.804878 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:50:37.804888 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:50:37.804899 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:50:37.804910 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:50:37.804920 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:50:37.804931 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:50:37.804942 | orchestrator | 2025-06-01 22:50:37.804953 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-06-01 22:50:37.804965 | orchestrator | Sunday 01 June 2025 22:46:36 +0000 (0:00:01.534) 0:00:21.117 *********** 2025-06-01 22:50:37.804976 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:50:37.804986 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:50:37.804997 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:50:37.805007 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:50:37.805018 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:50:37.805029 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:50:37.805040 | orchestrator | 2025-06-01 22:50:37.805050 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 
2025-06-01 22:50:37.805061 | orchestrator | Sunday 01 June 2025 22:46:37 +0000 (0:00:01.049) 0:00:22.166 *********** 2025-06-01 22:50:37.805072 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-06-01 22:50:37.805083 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-06-01 22:50:37.805094 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:50:37.805104 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-06-01 22:50:37.805115 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-06-01 22:50:37.805126 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:50:37.805137 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-06-01 22:50:37.805147 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-06-01 22:50:37.805158 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:50:37.805169 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-06-01 22:50:37.805179 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-06-01 22:50:37.805190 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:50:37.805201 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-06-01 22:50:37.805211 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-06-01 22:50:37.805222 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:50:37.805233 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-06-01 22:50:37.805244 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-06-01 22:50:37.805254 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:50:37.805265 | orchestrator | 2025-06-01 22:50:37.805276 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-06-01 22:50:37.805294 | orchestrator | Sunday 01 June 2025 22:46:38 +0000 (0:00:01.030) 0:00:23.197 *********** 2025-06-01 22:50:37.805306 | orchestrator | skipping: 
[testbed-node-3] 2025-06-01 22:50:37.805317 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:50:37.805327 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:50:37.805338 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:50:37.805348 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:50:37.805359 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:50:37.805369 | orchestrator | 2025-06-01 22:50:37.805380 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-06-01 22:50:37.805398 | orchestrator | 2025-06-01 22:50:37.805414 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-06-01 22:50:37.805425 | orchestrator | Sunday 01 June 2025 22:46:40 +0000 (0:00:01.742) 0:00:24.939 *********** 2025-06-01 22:50:37.805436 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:50:37.805446 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:50:37.805457 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:50:37.805468 | orchestrator | 2025-06-01 22:50:37.805479 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-06-01 22:50:37.805490 | orchestrator | Sunday 01 June 2025 22:46:41 +0000 (0:00:01.560) 0:00:26.500 *********** 2025-06-01 22:50:37.805500 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:50:37.805511 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:50:37.805521 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:50:37.805532 | orchestrator | 2025-06-01 22:50:37.805543 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-06-01 22:50:37.805553 | orchestrator | Sunday 01 June 2025 22:46:43 +0000 (0:00:01.255) 0:00:27.756 *********** 2025-06-01 22:50:37.805564 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:50:37.805574 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:50:37.805585 | orchestrator | ok: [testbed-node-2] 
2025-06-01 22:50:37.805595 | orchestrator | 2025-06-01 22:50:37.805606 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-06-01 22:50:37.805617 | orchestrator | Sunday 01 June 2025 22:46:44 +0000 (0:00:01.145) 0:00:28.902 *********** 2025-06-01 22:50:37.805627 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:50:37.805866 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:50:37.805935 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:50:37.805950 | orchestrator | 2025-06-01 22:50:37.805964 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-06-01 22:50:37.805977 | orchestrator | Sunday 01 June 2025 22:46:45 +0000 (0:00:00.915) 0:00:29.818 *********** 2025-06-01 22:50:37.805988 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:50:37.806000 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:50:37.806011 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:50:37.806102 | orchestrator | 2025-06-01 22:50:37.806115 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-06-01 22:50:37.806127 | orchestrator | Sunday 01 June 2025 22:46:45 +0000 (0:00:00.249) 0:00:30.067 *********** 2025-06-01 22:50:37.806139 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:50:37.806151 | orchestrator | 2025-06-01 22:50:37.806162 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-06-01 22:50:37.806173 | orchestrator | Sunday 01 June 2025 22:46:46 +0000 (0:00:00.516) 0:00:30.583 *********** 2025-06-01 22:50:37.806184 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:50:37.806195 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:50:37.806206 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:50:37.806217 | orchestrator | 2025-06-01 22:50:37.806229 | orchestrator | TASK [k3s_server : 
Create manifests directory on first master] ***************** 2025-06-01 22:50:37.806239 | orchestrator | Sunday 01 June 2025 22:46:48 +0000 (0:00:02.650) 0:00:33.233 *********** 2025-06-01 22:50:37.806250 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:50:37.806261 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:50:37.806272 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:50:37.806283 | orchestrator | 2025-06-01 22:50:37.806294 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-06-01 22:50:37.806304 | orchestrator | Sunday 01 June 2025 22:46:49 +0000 (0:00:00.846) 0:00:34.080 *********** 2025-06-01 22:50:37.806315 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:50:37.806326 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:50:37.806336 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:50:37.806347 | orchestrator | 2025-06-01 22:50:37.806358 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-06-01 22:50:37.806405 | orchestrator | Sunday 01 June 2025 22:46:50 +0000 (0:00:01.048) 0:00:35.129 *********** 2025-06-01 22:50:37.806416 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:50:37.806427 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:50:37.806437 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:50:37.806448 | orchestrator | 2025-06-01 22:50:37.806459 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-06-01 22:50:37.806470 | orchestrator | Sunday 01 June 2025 22:46:52 +0000 (0:00:01.916) 0:00:37.046 *********** 2025-06-01 22:50:37.806480 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:50:37.806491 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:50:37.806502 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:50:37.806513 | orchestrator | 2025-06-01 22:50:37.806523 | orchestrator | TASK [k3s_server : Deploy 
kube-vip manifest] *********************************** 2025-06-01 22:50:37.806534 | orchestrator | Sunday 01 June 2025 22:46:52 +0000 (0:00:00.406) 0:00:37.452 *********** 2025-06-01 22:50:37.806544 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:50:37.806555 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:50:37.806566 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:50:37.806576 | orchestrator | 2025-06-01 22:50:37.806587 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-06-01 22:50:37.806598 | orchestrator | Sunday 01 June 2025 22:46:53 +0000 (0:00:00.441) 0:00:37.894 *********** 2025-06-01 22:50:37.806609 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:50:37.806620 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:50:37.806631 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:50:37.806689 | orchestrator | 2025-06-01 22:50:37.806701 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-06-01 22:50:37.806712 | orchestrator | Sunday 01 June 2025 22:46:55 +0000 (0:00:02.563) 0:00:40.457 *********** 2025-06-01 22:50:37.806757 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-01 22:50:37.806771 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-01 22:50:37.806800 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-01 22:50:37.806812 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2025-06-01 22:50:37.806823 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-01 22:50:37.806834 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-01 22:50:37.806844 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-01 22:50:37.806855 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-01 22:50:37.806866 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-01 22:50:37.806877 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-01 22:50:37.806887 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-01 22:50:37.806898 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-01 22:50:37.806909 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-06-01 22:50:37.806928 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-06-01 22:50:37.806939 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2025-06-01 22:50:37.806950 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:50:37.806961 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:50:37.806972 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:50:37.806983 | orchestrator | 2025-06-01 22:50:37.806994 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-06-01 22:50:37.807005 | orchestrator | Sunday 01 June 2025 22:47:51 +0000 (0:00:56.026) 0:01:36.483 *********** 2025-06-01 22:50:37.807015 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:50:37.807026 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:50:37.807037 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:50:37.807048 | orchestrator | 2025-06-01 22:50:37.807058 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-06-01 22:50:37.807069 | orchestrator | Sunday 01 June 2025 22:47:52 +0000 (0:00:00.396) 0:01:36.879 *********** 2025-06-01 22:50:37.807080 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:50:37.807091 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:50:37.807101 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:50:37.807112 | orchestrator | 2025-06-01 22:50:37.807123 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-06-01 22:50:37.807134 | orchestrator | Sunday 01 June 2025 22:47:53 +0000 (0:00:01.168) 0:01:38.048 *********** 2025-06-01 22:50:37.807144 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:50:37.807155 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:50:37.807166 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:50:37.807176 | orchestrator | 2025-06-01 22:50:37.807187 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-06-01 22:50:37.807198 | orchestrator | Sunday 01 June 2025 22:47:54 +0000 (0:00:01.257) 0:01:39.305 *********** 2025-06-01 22:50:37.807209 
| orchestrator | changed: [testbed-node-1] 2025-06-01 22:50:37.807219 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:50:37.807230 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:50:37.807240 | orchestrator | 2025-06-01 22:50:37.807251 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-06-01 22:50:37.807262 | orchestrator | Sunday 01 June 2025 22:48:08 +0000 (0:00:14.102) 0:01:53.407 *********** 2025-06-01 22:50:37.807273 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:50:37.807283 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:50:37.807294 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:50:37.807305 | orchestrator | 2025-06-01 22:50:37.807316 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-06-01 22:50:37.807327 | orchestrator | Sunday 01 June 2025 22:48:09 +0000 (0:00:00.667) 0:01:54.074 *********** 2025-06-01 22:50:37.807337 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:50:37.807348 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:50:37.807359 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:50:37.807369 | orchestrator | 2025-06-01 22:50:37.807380 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-06-01 22:50:37.807391 | orchestrator | Sunday 01 June 2025 22:48:10 +0000 (0:00:00.601) 0:01:54.676 *********** 2025-06-01 22:50:37.807402 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:50:37.807413 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:50:37.807423 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:50:37.807434 | orchestrator | 2025-06-01 22:50:37.807453 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-06-01 22:50:37.807464 | orchestrator | Sunday 01 June 2025 22:48:10 +0000 (0:00:00.612) 0:01:55.288 *********** 2025-06-01 22:50:37.807475 | orchestrator | ok: [testbed-node-1] 
2025-06-01 22:50:37.807486 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:50:37.807506 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:50:37.807517 | orchestrator | 2025-06-01 22:50:37.807528 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-06-01 22:50:37.807544 | orchestrator | Sunday 01 June 2025 22:48:11 +0000 (0:00:00.822) 0:01:56.110 *********** 2025-06-01 22:50:37.807555 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:50:37.807566 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:50:37.807577 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:50:37.807588 | orchestrator | 2025-06-01 22:50:37.807599 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-06-01 22:50:37.807609 | orchestrator | Sunday 01 June 2025 22:48:11 +0000 (0:00:00.285) 0:01:56.395 *********** 2025-06-01 22:50:37.807620 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:50:37.807631 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:50:37.807666 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:50:37.807677 | orchestrator | 2025-06-01 22:50:37.807688 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-06-01 22:50:37.807699 | orchestrator | Sunday 01 June 2025 22:48:12 +0000 (0:00:00.617) 0:01:57.013 *********** 2025-06-01 22:50:37.807709 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:50:37.807720 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:50:37.807731 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:50:37.807742 | orchestrator | 2025-06-01 22:50:37.807753 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-06-01 22:50:37.807764 | orchestrator | Sunday 01 June 2025 22:48:13 +0000 (0:00:00.591) 0:01:57.605 *********** 2025-06-01 22:50:37.807774 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:50:37.807785 | 
orchestrator | changed: [testbed-node-1] 2025-06-01 22:50:37.807796 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:50:37.807807 | orchestrator | 2025-06-01 22:50:37.807818 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-06-01 22:50:37.807829 | orchestrator | Sunday 01 June 2025 22:48:14 +0000 (0:00:01.084) 0:01:58.689 *********** 2025-06-01 22:50:37.807840 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:50:37.807850 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:50:37.807861 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:50:37.807872 | orchestrator | 2025-06-01 22:50:37.807883 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-06-01 22:50:37.807894 | orchestrator | Sunday 01 June 2025 22:48:14 +0000 (0:00:00.773) 0:01:59.462 *********** 2025-06-01 22:50:37.807905 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:50:37.807915 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:50:37.807926 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:50:37.807937 | orchestrator | 2025-06-01 22:50:37.807948 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-06-01 22:50:37.807959 | orchestrator | Sunday 01 June 2025 22:48:15 +0000 (0:00:00.287) 0:01:59.750 *********** 2025-06-01 22:50:37.807970 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:50:37.807980 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:50:37.807991 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:50:37.808002 | orchestrator | 2025-06-01 22:50:37.808013 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-06-01 22:50:37.808024 | orchestrator | Sunday 01 June 2025 22:48:15 +0000 (0:00:00.260) 0:02:00.010 *********** 2025-06-01 22:50:37.808035 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:50:37.808046 | orchestrator | 
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Get sub dirs of manifests folder] ***************************
Sunday 01 June 2025 22:48:16 +0000 (0:00:00.853) 0:02:00.864 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
Sunday 01 June 2025 22:48:16 +0000 (0:00:00.619) 0:02:01.483 ***********
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)

PLAY [Deploy k3s worker nodes] *************************************************

TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
Sunday 01 June 2025 22:48:20 +0000 (0:00:03.081)
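The "Remove manifests and folders that are only needed for bootstrapping" task above amounts to deleting the bootstrap-only files so that k3s does not re-apply them when the service starts. A minimal sketch of that cleanup, simulated in a temp directory instead of the real `/var/lib/rancher/k3s/server/manifests` path:

```shell
# Simulate the bootstrap-manifest cleanup; on a real server node the files
# live under /var/lib/rancher/k3s/server/manifests (see the log items above).
manifests=$(mktemp -d)
touch "$manifests/ccm.yaml" "$manifests/rolebindings.yaml" "$manifests/coredns.yaml"
for f in ccm.yaml rolebindings.yaml coredns.yaml; do
  rm -f "$manifests/$f"   # k3s cannot auto-apply what is no longer present
done
find "$manifests" -type f   # prints nothing: the directory is empty again
```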
0:02:04.565 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Check if system is PXE-booted] *******************************
Sunday 01 June 2025 22:48:20 +0000 (0:00:00.514) 0:02:05.080 ***********
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]

TASK [k3s_agent : Set fact for PXE-booted system] ******************************
Sunday 01 June 2025 22:48:22 +0000 (0:00:01.542) 0:02:06.622 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Include http_proxy configuration tasks] **********************
Sunday 01 June 2025 22:48:22 +0000 (0:00:00.299) 0:02:06.921 ***********
included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [k3s_agent : Create k3s-node.service.d directory] *************************
Sunday 01 June 2025 22:48:23 +0000 (0:00:00.653) 0:02:07.575 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
Sunday 01 June 2025 22:48:23 +0000 (0:00:00.297) 0:02:07.872 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
Sunday 01 June 2025 22:48:23 +0000 (0:00:00.316) 0:02:08.188 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Configure the k3s service] ***********************************
Sunday 01 June 2025 22:48:23 +0000 (0:00:00.273) 0:02:08.462 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Manage k3s service] ******************************************
Sunday 01 June 2025 22:48:25 +0000 (0:00:01.750) 0:02:10.213 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Sunday 01 June 2025 22:48:34 +0000 (0:00:08.864) 0:02:19.077 ***********
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Sunday 01 June 2025 22:48:35 +0000 (0:00:00.805) 0:02:19.882 ***********
changed: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Sunday 01 June 2025 22:48:35 +0000 (0:00:00.420) 0:02:20.303 ***********
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Sunday 01 June 2025 22:48:36 +0000 (0:00:01.022) 0:02:21.325 ***********
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Sunday 01 June 2025 22:48:37 +0000 (0:00:00.848) 0:02:22.173 ***********
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Sunday 01 June 2025 22:48:38 +0000 (0:00:00.605) 0:02:22.779 ***********
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Sunday 01 June 2025 22:48:39 +0000 (0:00:01.668) 0:02:24.448 ***********
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Sunday 01 June 2025 22:48:40 +0000 (0:00:00.843) 0:02:25.291 ***********
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Sunday 01 June 2025 22:48:41 +0000 (0:00:00.428) 0:02:25.720 ***********
changed: [testbed-manager]

PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Sunday 01 June 2025 22:48:41 +0000 (0:00:00.442)
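The "Prepare kubeconfig file" play above fetches the kubeconfig from the first control node and rewrites its server address before use on the manager. A minimal sketch of the rewrite step, using a stand-in file (the real source is k3s's kubeconfig on testbed-node-0, and 192.168.16.10 is the node address seen in the log):

```shell
# Stand-in for the kubeconfig fetched from the first k3s server node.
kubeconfig=$(mktemp)
printf 'clusters:\n- cluster:\n    server: https://127.0.0.1:6443\n' > "$kubeconfig"
# Point the client at a reachable cluster address instead of the node-local
# loopback that k3s writes by default (address is an example, adjust as needed).
sed -i 's#https://127.0.0.1:6443#https://192.168.16.10:6443#' "$kubeconfig"
grep 'server:' "$kubeconfig"   # now shows the rewritten address
```

On the manager, the play then exports `KUBECONFIG` pointing at this file, which is what the "Set KUBECONFIG environment variable" task records as changed.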
0:02:26.162 ***********
ok: [testbed-manager]

TASK [kubectl : Include distribution specific install tasks] *******************
Sunday 01 June 2025 22:48:41 +0000 (0:00:00.154) 0:02:26.316 ***********
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Sunday 01 June 2025 22:48:42 +0000 (0:00:00.221) 0:02:26.538 ***********
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
Sunday 01 June 2025 22:48:43 +0000 (0:00:01.001) 0:02:27.539 ***********
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Sunday 01 June 2025 22:48:44 +0000 (0:00:01.225) 0:02:28.765 ***********
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Sunday 01 June 2025 22:48:44 +0000 (0:00:00.706) 0:02:29.471 ***********
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Sunday 01 June 2025 22:48:45 +0000 (0:00:00.400) 0:02:29.871 ***********
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Sunday 01 June 2025 22:48:51 +0000 (0:00:06.035) 0:02:35.907 ***********
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Sunday 01 June 2025 22:49:02 +0000 (0:00:10.996) 0:02:46.904 ***********
ok: [testbed-manager]

PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Sunday 01 June 2025 22:49:02 +0000 (0:00:00.450) 0:02:47.354 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Sunday 01 June 2025 22:49:03 +0000 (0:00:00.427) 0:02:47.782 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Sunday 01 June 2025 22:49:03 +0000 (0:00:00.262) 0:02:48.044 ***********
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Sunday 01 June 2025 22:49:04 +0000 (0:00:00.492) 0:02:48.537 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Sunday 01 June 2025 22:49:04 +0000 (0:00:00.942) 0:02:49.479 ***********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Sunday 01 June 2025 22:49:05 +0000 (0:00:00.969) 0:02:50.449 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Sunday 01 June 2025 22:49:06 +0000 (0:00:00.508) 0:02:50.957 ***********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Check Cilium version] **********************************
Sunday 01 June 2025 22:49:07 +0000 (0:00:01.005) 0:02:51.962 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version] ************************
Sunday 01 June 2025 22:49:07 +0000 (0:00:00.280) 0:02:52.242 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium needs update] **********************
Sunday 01 June 2025 22:49:07 +0000 (0:00:00.186) 0:02:52.429 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Log result] ********************************************
Sunday 01 June 2025 22:49:08 +0000 (0:00:00.197) 0:02:52.627 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Install Cilium] ****************************************
Sunday 01 June 2025 22:49:08 +0000 (0:00:00.210) 0:02:52.838 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for Cilium resources] *****************************
Sunday 01 June 2025 22:49:13 +0000 (0:00:05.556) 0:02:58.394 ***********
ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)

TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
Sunday 01 June 2025 22:50:06 +0000 (0:00:52.972) 0:03:51.367 ***********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Copy BGP manifests to first master] ********************
Sunday 01 June 2025 22:50:08 +0000 (0:00:01.350) 0:03:52.717 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Apply BGP manifests] ***********************************
Sunday 01 June 2025 22:50:09 +0000 (0:00:01.391) 0:03:54.109 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
Sunday 01 June 2025 22:50:10 +0000 (0:00:01.224) 0:03:55.334 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for BGP config resources] *************************
Sunday 01 June 2025 22:50:10 +0000 (0:00:00.176) 0:03:55.510 ***********
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)

TASK [k3s_server_post : Deploy metallb pool] ***********************************
Sunday 01 June 2025 22:50:13 +0000 (0:00:02.149) 0:03:57.660 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
Sunday 01 June 2025 22:50:13 +0000 (0:00:00.324) 0:03:57.985 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role k9s] **********************************************************
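The "Wait for Cilium resources" task above shows the classic retry pattern: a readiness probe is attempted up to 30 times (one FAILED - RETRYING line appears before all items report ok). The loop structure can be sketched as follows, with a hypothetical `check_ready` probe standing in for the real check, which would be something like `kubectl rollout status` against each Cilium resource:

```shell
# check_ready is a hypothetical stand-in for the real readiness probe,
# e.g. `kubectl -n kube-system rollout status deployment/cilium-operator`.
flag=$(mktemp)                     # simulate an already-ready resource
check_ready() { [ -e "$flag" ]; }
retries=30
until check_ready; do
  retries=$((retries - 1))
  [ "$retries" -gt 0 ] || { echo "gave up waiting" >&2; exit 1; }
  sleep 1
done
echo "resource ready with $retries retries left"
```

In the log, one probe failed once and succeeded on retry, which is why the task still ends ok while consuming almost 53 seconds of the run.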
TASK [k9s : Gather variables for each operating system] ************************
Sunday 01 June 2025 22:50:14 +0000 (0:00:00.796) 0:03:58.782 ***********
ok: [testbed-manager]

TASK [k9s : Include distribution specific install tasks] ***********************
Sunday 01 June 2025 22:50:14 +0000 (0:00:00.142) 0:03:58.924 ***********
included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager

TASK [k9s : Install k9s packages] **********************************************
Sunday 01 June 2025 22:50:14 +0000 (0:00:00.416) 0:03:59.341 ***********
changed: [testbed-manager]

PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************

TASK [Merge labels, annotations, and taints] ***********************************
Sunday 01 June 2025 22:50:20 +0000 (0:00:06.108) 0:04:05.449 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Manage labels] ***********************************************************
Sunday 01 June 2025 22:50:21 +0000 (0:00:00.748) 0:04:06.198 ***********
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
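Each item in the "Manage labels" task corresponds to one label applied to one node through the Kubernetes API. A sketch of the equivalent per-node/per-label loop; since no cluster is assumed here, the `kubectl label` invocations are written to a file and printed rather than executed:

```shell
# Generate the kubectl commands equivalent to a subset of the label items
# above (control-plane nodes only); in the real task these run via the API.
for label in node-role.osism.tech/control-plane=true \
             openstack-control-plane=enabled \
             node-role.osism.tech/network-plane=true; do
  for node in testbed-node-0 testbed-node-1 testbed-node-2; do
    echo "kubectl label node $node $label --overwrite"
  done
done > /tmp/label-cmds.txt
wc -l < /tmp/label-cmds.txt   # 9: one command per node/label pair
```

The `--overwrite` flag is what makes the operation idempotent, matching the ok (rather than changed) status in the log for labels that were already set.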
TASK [Manage annotations] ******************************************************
Sunday 01 June 2025 22:50:34 +0000 (0:00:12.445) 0:04:18.644 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [Manage taints] ***********************************************************
Sunday 01 June 2025 22:50:34 +0000 (0:00:00.554) 0:04:19.198 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0  rescued=0 ignored=0
testbed-node-0  : ok=46  changed=21  unreachable=0 failed=0 skipped=27 rescued=0 ignored=0
testbed-node-1  : ok=34  changed=14  unreachable=0 failed=0 skipped=24 rescued=0 ignored=0
testbed-node-2  : ok=34  changed=14  unreachable=0 failed=0 skipped=24 rescued=0 ignored=0
testbed-node-3  : ok=14  changed=6  unreachable=0 failed=0 skipped=16 rescued=0 ignored=0
testbed-node-4  : ok=14  changed=6  unreachable=0 failed=0 skipped=16 rescued=0 ignored=0
testbed-node-5  : ok=14  changed=6  unreachable=0 failed=0 skipped=16 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Sunday 01 June 2025 22:50:35 +0000 (0:00:00.525) 0:04:19.724 ***********
===============================================================================
k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 56.03s
k3s_server_post : Wait for Cilium resources ---------------------------- 52.97s
k3s_server : Enable and check K3s service ------------------------------ 14.10s
Manage labels ---------------------------------------------------------- 12.45s
kubectl : Install required packages ------------------------------------ 11.00s
k3s_agent : Manage k3s service ------------------------------------------ 8.86s
k3s_download : Download k3s binary x64 ---------------------------------- 6.21s
k9s : Install k9s packages ---------------------------------------------- 6.11s
kubectl : Add repository Debian ----------------------------------------- 6.04s
k3s_server_post : Install Cilium ---------------------------------------- 5.56s
k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.08s
k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.65s
k3s_server : Init cluster inside the transient k3s-init service --------- 2.56s
k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.24s
k3s_server_post : Test for BGP config resources ------------------------- 2.15s
k3s_server : Copy vip manifest to first master -------------------------- 1.92s
k3s_agent : Configure the k3s service ----------------------------------- 1.75s
k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.74s
Make kubeconfig available for use inside the manager service ------------ 1.67s
k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.56s
2025-06-01 22:50:40 | INFO  | Task 9f3306f5-0f8e-497a-a3b0-0973bd5c67cc is in state STARTED
2025-06-01 22:50:40 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED
2025-06-01 22:50:40 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:50:40 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED
2025-06-01 22:50:40 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:50:40 | INFO  | Task 379e48b1-f8c2-4fe7-86c7-9d11058c42e4 is in state STARTED
2025-06-01 22:50:40 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:50:43 | INFO  | Task 9f3306f5-0f8e-497a-a3b0-0973bd5c67cc is in state STARTED
2025-06-01 22:50:43 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED
2025-06-01 22:50:43 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:50:43 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED
2025-06-01 22:50:44 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:50:44 | INFO  | Task 379e48b1-f8c2-4fe7-86c7-9d11058c42e4 is in state SUCCESS
2025-06-01 22:50:44 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:50:47 | INFO  | Task 9f3306f5-0f8e-497a-a3b0-0973bd5c67cc is in state STARTED
2025-06-01 22:50:47 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED
2025-06-01 22:50:47 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:50:47 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED
2025-06-01 22:50:47 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:50:47 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:50:50 | INFO  | Task 9f3306f5-0f8e-497a-a3b0-0973bd5c67cc is in state SUCCESS
2025-06-01 22:50:50 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED
2025-06-01 22:50:50 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:50:50 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED
2025-06-01 22:50:50 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:50:50 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:50:53 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED
2025-06-01 22:50:53 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:50:53 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED
2025-06-01 22:50:53 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:50:53 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:50:56 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED
2025-06-01 22:50:56 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:50:56 | INFO  | Task
55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED 2025-06-01 22:50:56.261937 | orchestrator | 2025-06-01 22:50:56 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:50:56.262344 | orchestrator | 2025-06-01 22:50:56 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:50:59.311909 | orchestrator | 2025-06-01 22:50:59 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:50:59.312352 | orchestrator | 2025-06-01 22:50:59 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:50:59.313175 | orchestrator | 2025-06-01 22:50:59 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED 2025-06-01 22:50:59.314372 | orchestrator | 2025-06-01 22:50:59 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:50:59.314463 | orchestrator | 2025-06-01 22:50:59 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:51:02.368083 | orchestrator | 2025-06-01 22:51:02 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:51:02.370911 | orchestrator | 2025-06-01 22:51:02 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:51:02.370950 | orchestrator | 2025-06-01 22:51:02 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED 2025-06-01 22:51:02.370962 | orchestrator | 2025-06-01 22:51:02 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:51:02.370973 | orchestrator | 2025-06-01 22:51:02 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:51:05.417127 | orchestrator | 2025-06-01 22:51:05 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:51:05.420347 | orchestrator | 2025-06-01 22:51:05 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:51:05.422049 | orchestrator | 2025-06-01 22:51:05 | INFO  | Task 
55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED 2025-06-01 22:51:05.426890 | orchestrator | 2025-06-01 22:51:05 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:51:05.426911 | orchestrator | 2025-06-01 22:51:05 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:51:08.479467 | orchestrator | 2025-06-01 22:51:08 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:51:08.480027 | orchestrator | 2025-06-01 22:51:08 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:51:08.482161 | orchestrator | 2025-06-01 22:51:08 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED 2025-06-01 22:51:08.483735 | orchestrator | 2025-06-01 22:51:08 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:51:08.483759 | orchestrator | 2025-06-01 22:51:08 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:51:11.511912 | orchestrator | 2025-06-01 22:51:11 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:51:11.512716 | orchestrator | 2025-06-01 22:51:11 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:51:11.513995 | orchestrator | 2025-06-01 22:51:11 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED 2025-06-01 22:51:11.515561 | orchestrator | 2025-06-01 22:51:11 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:51:11.515588 | orchestrator | 2025-06-01 22:51:11 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:51:14.552890 | orchestrator | 2025-06-01 22:51:14 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:51:14.553173 | orchestrator | 2025-06-01 22:51:14 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:51:14.554177 | orchestrator | 2025-06-01 22:51:14 | INFO  | Task 
55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED 2025-06-01 22:51:14.555402 | orchestrator | 2025-06-01 22:51:14 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:51:14.555427 | orchestrator | 2025-06-01 22:51:14 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:51:17.591315 | orchestrator | 2025-06-01 22:51:17 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:51:17.591604 | orchestrator | 2025-06-01 22:51:17 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:51:17.592689 | orchestrator | 2025-06-01 22:51:17 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state STARTED 2025-06-01 22:51:17.593530 | orchestrator | 2025-06-01 22:51:17 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:51:17.593555 | orchestrator | 2025-06-01 22:51:17 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:51:20.624128 | orchestrator | 2025-06-01 22:51:20 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:51:20.624254 | orchestrator | 2025-06-01 22:51:20 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:51:20.624566 | orchestrator | 2025-06-01 22:51:20 | INFO  | Task 55c3b00c-a71b-486b-a76a-46cfb68c6f88 is in state SUCCESS 2025-06-01 22:51:20.626077 | orchestrator | 2025-06-01 22:51:20.626132 | orchestrator | 2025-06-01 22:51:20.626152 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-06-01 22:51:20.626174 | orchestrator | 2025-06-01 22:51:20.626195 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-01 22:51:20.626213 | orchestrator | Sunday 01 June 2025 22:50:39 +0000 (0:00:00.146) 0:00:00.146 *********** 2025-06-01 22:51:20.626226 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-01 22:51:20.626237 | 
orchestrator | 2025-06-01 22:51:20.626248 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-01 22:51:20.626259 | orchestrator | Sunday 01 June 2025 22:50:40 +0000 (0:00:01.103) 0:00:01.249 *********** 2025-06-01 22:51:20.626270 | orchestrator | changed: [testbed-manager] 2025-06-01 22:51:20.626282 | orchestrator | 2025-06-01 22:51:20.626293 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-06-01 22:51:20.626304 | orchestrator | Sunday 01 June 2025 22:50:42 +0000 (0:00:01.729) 0:00:02.978 *********** 2025-06-01 22:51:20.626315 | orchestrator | changed: [testbed-manager] 2025-06-01 22:51:20.626326 | orchestrator | 2025-06-01 22:51:20.626337 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:51:20.626349 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 22:51:20.626362 | orchestrator | 2025-06-01 22:51:20.626373 | orchestrator | 2025-06-01 22:51:20.626384 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:51:20.626394 | orchestrator | Sunday 01 June 2025 22:50:42 +0000 (0:00:00.594) 0:00:03.573 *********** 2025-06-01 22:51:20.626405 | orchestrator | =============================================================================== 2025-06-01 22:51:20.626416 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.73s 2025-06-01 22:51:20.626427 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.10s 2025-06-01 22:51:20.626438 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.60s 2025-06-01 22:51:20.626449 | orchestrator | 2025-06-01 22:51:20.626460 | orchestrator | 2025-06-01 22:51:20.626471 | orchestrator | PLAY [Prepare kubeconfig file] 
************************************************* 2025-06-01 22:51:20.626507 | orchestrator | 2025-06-01 22:51:20.626519 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-01 22:51:20.626530 | orchestrator | Sunday 01 June 2025 22:50:39 +0000 (0:00:00.152) 0:00:00.152 *********** 2025-06-01 22:51:20.626541 | orchestrator | ok: [testbed-manager] 2025-06-01 22:51:20.626552 | orchestrator | 2025-06-01 22:51:20.626563 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-01 22:51:20.626574 | orchestrator | Sunday 01 June 2025 22:50:40 +0000 (0:00:00.873) 0:00:01.025 *********** 2025-06-01 22:51:20.626585 | orchestrator | ok: [testbed-manager] 2025-06-01 22:51:20.626595 | orchestrator | 2025-06-01 22:51:20.626607 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-01 22:51:20.626620 | orchestrator | Sunday 01 June 2025 22:50:40 +0000 (0:00:00.608) 0:00:01.633 *********** 2025-06-01 22:51:20.626677 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-01 22:51:20.626860 | orchestrator | 2025-06-01 22:51:20.626880 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-01 22:51:20.626893 | orchestrator | Sunday 01 June 2025 22:50:41 +0000 (0:00:00.811) 0:00:02.444 *********** 2025-06-01 22:51:20.626906 | orchestrator | changed: [testbed-manager] 2025-06-01 22:51:20.626918 | orchestrator | 2025-06-01 22:51:20.626931 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-01 22:51:20.626944 | orchestrator | Sunday 01 June 2025 22:50:43 +0000 (0:00:01.900) 0:00:04.345 *********** 2025-06-01 22:51:20.626957 | orchestrator | changed: [testbed-manager] 2025-06-01 22:51:20.626969 | orchestrator | 2025-06-01 22:51:20.626981 | orchestrator | TASK [Make kubeconfig available for use inside the manager 
service] ************ 2025-06-01 22:51:20.626992 | orchestrator | Sunday 01 June 2025 22:50:44 +0000 (0:00:00.712) 0:00:05.057 *********** 2025-06-01 22:51:20.627003 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-01 22:51:20.627014 | orchestrator | 2025-06-01 22:51:20.627025 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-01 22:51:20.627036 | orchestrator | Sunday 01 June 2025 22:50:45 +0000 (0:00:01.611) 0:00:06.668 *********** 2025-06-01 22:51:20.627047 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-01 22:51:20.627058 | orchestrator | 2025-06-01 22:51:20.627069 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-01 22:51:20.627080 | orchestrator | Sunday 01 June 2025 22:50:46 +0000 (0:00:00.998) 0:00:07.667 *********** 2025-06-01 22:51:20.627091 | orchestrator | ok: [testbed-manager] 2025-06-01 22:51:20.627102 | orchestrator | 2025-06-01 22:51:20.627113 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-01 22:51:20.627124 | orchestrator | Sunday 01 June 2025 22:50:47 +0000 (0:00:00.436) 0:00:08.103 *********** 2025-06-01 22:51:20.627135 | orchestrator | ok: [testbed-manager] 2025-06-01 22:51:20.627146 | orchestrator | 2025-06-01 22:51:20.627157 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:51:20.627184 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 22:51:20.627195 | orchestrator | 2025-06-01 22:51:20.627206 | orchestrator | 2025-06-01 22:51:20.627217 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:51:20.627228 | orchestrator | Sunday 01 June 2025 22:50:47 +0000 (0:00:00.347) 0:00:08.451 *********** 2025-06-01 22:51:20.627239 | orchestrator | 
=============================================================================== 2025-06-01 22:51:20.627258 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.90s 2025-06-01 22:51:20.627277 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.61s 2025-06-01 22:51:20.627305 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.00s 2025-06-01 22:51:20.627342 | orchestrator | Get home directory of operator user ------------------------------------- 0.87s 2025-06-01 22:51:20.627362 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.81s 2025-06-01 22:51:20.627399 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.71s 2025-06-01 22:51:20.627419 | orchestrator | Create .kube directory -------------------------------------------------- 0.61s 2025-06-01 22:51:20.627436 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.44s 2025-06-01 22:51:20.627447 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.35s 2025-06-01 22:51:20.627458 | orchestrator | 2025-06-01 22:51:20.627469 | orchestrator | 2025-06-01 22:51:20.627479 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-06-01 22:51:20.627490 | orchestrator | 2025-06-01 22:51:20.627501 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-01 22:51:20.627512 | orchestrator | Sunday 01 June 2025 22:49:01 +0000 (0:00:00.236) 0:00:00.236 *********** 2025-06-01 22:51:20.627522 | orchestrator | ok: [localhost] => { 2025-06-01 22:51:20.627534 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2025-06-01 22:51:20.627545 | orchestrator | } 2025-06-01 22:51:20.627557 | orchestrator | 2025-06-01 22:51:20.627568 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-06-01 22:51:20.627578 | orchestrator | Sunday 01 June 2025 22:49:01 +0000 (0:00:00.044) 0:00:00.280 *********** 2025-06-01 22:51:20.627590 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-06-01 22:51:20.627602 | orchestrator | ...ignoring 2025-06-01 22:51:20.627614 | orchestrator | 2025-06-01 22:51:20.627650 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-06-01 22:51:20.627662 | orchestrator | Sunday 01 June 2025 22:49:04 +0000 (0:00:02.983) 0:00:03.263 *********** 2025-06-01 22:51:20.627673 | orchestrator | skipping: [localhost] 2025-06-01 22:51:20.627684 | orchestrator | 2025-06-01 22:51:20.627694 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-06-01 22:51:20.627705 | orchestrator | Sunday 01 June 2025 22:49:04 +0000 (0:00:00.276) 0:00:03.540 *********** 2025-06-01 22:51:20.627716 | orchestrator | ok: [localhost] 2025-06-01 22:51:20.627727 | orchestrator | 2025-06-01 22:51:20.627737 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 22:51:20.627748 | orchestrator | 2025-06-01 22:51:20.627759 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 22:51:20.627769 | orchestrator | Sunday 01 June 2025 22:49:05 +0000 (0:00:00.780) 0:00:04.320 *********** 2025-06-01 22:51:20.627780 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:51:20.627791 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:51:20.627802 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:51:20.627812 | orchestrator | 2025-06-01 
22:51:20.627823 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 22:51:20.627834 | orchestrator | Sunday 01 June 2025 22:49:06 +0000 (0:00:01.024) 0:00:05.344 *********** 2025-06-01 22:51:20.627845 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-06-01 22:51:20.627856 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-06-01 22:51:20.627867 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-06-01 22:51:20.627877 | orchestrator | 2025-06-01 22:51:20.627888 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-06-01 22:51:20.627899 | orchestrator | 2025-06-01 22:51:20.627910 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-01 22:51:20.627920 | orchestrator | Sunday 01 June 2025 22:49:06 +0000 (0:00:00.525) 0:00:05.870 *********** 2025-06-01 22:51:20.627931 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:51:20.627942 | orchestrator | 2025-06-01 22:51:20.627953 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-01 22:51:20.627963 | orchestrator | Sunday 01 June 2025 22:49:07 +0000 (0:00:00.901) 0:00:06.771 *********** 2025-06-01 22:51:20.627981 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:51:20.627992 | orchestrator | 2025-06-01 22:51:20.628003 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-06-01 22:51:20.628014 | orchestrator | Sunday 01 June 2025 22:49:09 +0000 (0:00:01.552) 0:00:08.324 *********** 2025-06-01 22:51:20.628024 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:51:20.628035 | orchestrator | 2025-06-01 22:51:20.628045 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 
2025-06-01 22:51:20.628056 | orchestrator | Sunday 01 June 2025 22:49:09 +0000 (0:00:00.616) 0:00:08.940 *********** 2025-06-01 22:51:20.628066 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:51:20.628077 | orchestrator | 2025-06-01 22:51:20.628088 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-06-01 22:51:20.628098 | orchestrator | Sunday 01 June 2025 22:49:11 +0000 (0:00:01.926) 0:00:10.867 *********** 2025-06-01 22:51:20.628109 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:51:20.628120 | orchestrator | 2025-06-01 22:51:20.628130 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-06-01 22:51:20.628147 | orchestrator | Sunday 01 June 2025 22:49:12 +0000 (0:00:01.069) 0:00:11.936 *********** 2025-06-01 22:51:20.628158 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:51:20.628169 | orchestrator | 2025-06-01 22:51:20.628179 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-01 22:51:20.628190 | orchestrator | Sunday 01 June 2025 22:49:13 +0000 (0:00:00.577) 0:00:12.513 *********** 2025-06-01 22:51:20.628201 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:51:20.628212 | orchestrator | 2025-06-01 22:51:20.628222 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-01 22:51:20.628241 | orchestrator | Sunday 01 June 2025 22:49:14 +0000 (0:00:01.267) 0:00:13.780 *********** 2025-06-01 22:51:20.628252 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:51:20.628263 | orchestrator | 2025-06-01 22:51:20.628274 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-06-01 22:51:20.628284 | orchestrator | Sunday 01 June 2025 22:49:15 +0000 (0:00:00.850) 0:00:14.631 *********** 2025-06-01 
22:51:20.628295 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:51:20.628306 | orchestrator | 2025-06-01 22:51:20.628317 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-06-01 22:51:20.628327 | orchestrator | Sunday 01 June 2025 22:49:15 +0000 (0:00:00.324) 0:00:14.955 *********** 2025-06-01 22:51:20.628338 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:51:20.628349 | orchestrator | 2025-06-01 22:51:20.628360 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-06-01 22:51:20.628370 | orchestrator | Sunday 01 June 2025 22:49:16 +0000 (0:00:00.306) 0:00:15.262 *********** 2025-06-01 22:51:20.628387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 22:51:20.628411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 22:51:20.628429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}}) 2025-06-01 22:51:20.628442 | orchestrator | 2025-06-01 22:51:20.628453 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-06-01 22:51:20.628464 | orchestrator | Sunday 01 June 2025 22:49:16 +0000 (0:00:00.773) 0:00:16.035 *********** 2025-06-01 22:51:20.628484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 22:51:20.628496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 22:51:20.628515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 22:51:20.628527 | orchestrator | 2025-06-01 22:51:20.628538 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-06-01 22:51:20.628549 | orchestrator | Sunday 01 June 2025 22:49:18 +0000 (0:00:01.712) 0:00:17.748 *********** 2025-06-01 22:51:20.628559 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-01 22:51:20.628570 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-01 22:51:20.628586 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-01 22:51:20.628597 | orchestrator | 2025-06-01 22:51:20.628608 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-06-01 22:51:20.628619 | orchestrator | Sunday 01 June 2025 22:49:20 +0000 (0:00:01.490) 0:00:19.238 *********** 2025-06-01 22:51:20.628689 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-01 22:51:20.628701 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-01 22:51:20.628712 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-01 22:51:20.628723 | orchestrator | 2025-06-01 22:51:20.628739 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-06-01 22:51:20.628750 | orchestrator | Sunday 01 June 2025 22:49:24 +0000 (0:00:04.036) 0:00:23.274 *********** 2025-06-01 22:51:20.628761 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-01 22:51:20.628772 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-01 22:51:20.628783 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-01 22:51:20.628794 | orchestrator | 2025-06-01 22:51:20.628804 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-06-01 22:51:20.628815 | orchestrator | Sunday 01 June 2025 22:49:26 +0000 (0:00:02.482) 0:00:25.757 *********** 
2025-06-01 22:51:20.628826 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-01 22:51:20.628837 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-01 22:51:20.628848 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-01 22:51:20.628866 | orchestrator | 2025-06-01 22:51:20.628877 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-06-01 22:51:20.628887 | orchestrator | Sunday 01 June 2025 22:49:28 +0000 (0:00:01.795) 0:00:27.552 *********** 2025-06-01 22:51:20.628898 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-01 22:51:20.628909 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-01 22:51:20.628920 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-01 22:51:20.628930 | orchestrator | 2025-06-01 22:51:20.628941 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-06-01 22:51:20.628952 | orchestrator | Sunday 01 June 2025 22:49:29 +0000 (0:00:01.552) 0:00:29.105 *********** 2025-06-01 22:51:20.628962 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-01 22:51:20.628973 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-01 22:51:20.628984 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-01 22:51:20.628995 | orchestrator | 2025-06-01 22:51:20.629005 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-01 22:51:20.629016 | orchestrator | Sunday 01 June 
2025 22:49:31 +0000 (0:00:01.863) 0:00:30.969 *********** 2025-06-01 22:51:20.629026 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:51:20.629037 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:51:20.629048 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:51:20.629058 | orchestrator | 2025-06-01 22:51:20.629068 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-06-01 22:51:20.629078 | orchestrator | Sunday 01 June 2025 22:49:32 +0000 (0:00:00.456) 0:00:31.426 *********** 2025-06-01 22:51:20.629088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 22:51:20.629112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 22:51:20.629130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 22:51:20.629140 | orchestrator | 2025-06-01 22:51:20.629150 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] 
************************************* 2025-06-01 22:51:20.629159 | orchestrator | Sunday 01 June 2025 22:49:33 +0000 (0:00:01.569) 0:00:32.996 *********** 2025-06-01 22:51:20.629169 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:51:20.629178 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:51:20.629188 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:51:20.629197 | orchestrator | 2025-06-01 22:51:20.629207 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-06-01 22:51:20.629217 | orchestrator | Sunday 01 June 2025 22:49:34 +0000 (0:00:00.850) 0:00:33.846 *********** 2025-06-01 22:51:20.629226 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:51:20.629236 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:51:20.629245 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:51:20.629255 | orchestrator | 2025-06-01 22:51:20.629265 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-06-01 22:51:20.629274 | orchestrator | Sunday 01 June 2025 22:49:42 +0000 (0:00:07.789) 0:00:41.636 *********** 2025-06-01 22:51:20.629284 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:51:20.629293 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:51:20.629303 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:51:20.629313 | orchestrator | 2025-06-01 22:51:20.629322 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-01 22:51:20.629332 | orchestrator | 2025-06-01 22:51:20.629341 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-01 22:51:20.629351 | orchestrator | Sunday 01 June 2025 22:49:42 +0000 (0:00:00.375) 0:00:42.011 *********** 2025-06-01 22:51:20.629360 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:51:20.629370 | orchestrator | 2025-06-01 22:51:20.629380 | orchestrator | TASK [rabbitmq : Put RabbitMQ node 
into maintenance mode] ********************** 2025-06-01 22:51:20.629389 | orchestrator | Sunday 01 June 2025 22:49:43 +0000 (0:00:00.670) 0:00:42.682 *********** 2025-06-01 22:51:20.629399 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:51:20.629408 | orchestrator | 2025-06-01 22:51:20.629418 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-01 22:51:20.629427 | orchestrator | Sunday 01 June 2025 22:49:43 +0000 (0:00:00.270) 0:00:42.953 *********** 2025-06-01 22:51:20.629437 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:51:20.629446 | orchestrator | 2025-06-01 22:51:20.629456 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-01 22:51:20.629465 | orchestrator | Sunday 01 June 2025 22:49:50 +0000 (0:00:06.960) 0:00:49.914 *********** 2025-06-01 22:51:20.629475 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:51:20.629484 | orchestrator | 2025-06-01 22:51:20.629494 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-01 22:51:20.629509 | orchestrator | 2025-06-01 22:51:20.629519 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-01 22:51:20.629528 | orchestrator | Sunday 01 June 2025 22:50:40 +0000 (0:00:49.603) 0:01:39.517 *********** 2025-06-01 22:51:20.629538 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:51:20.629547 | orchestrator | 2025-06-01 22:51:20.629557 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-01 22:51:20.629567 | orchestrator | Sunday 01 June 2025 22:50:40 +0000 (0:00:00.608) 0:01:40.125 *********** 2025-06-01 22:51:20.629576 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:51:20.629586 | orchestrator | 2025-06-01 22:51:20.629595 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 
2025-06-01 22:51:20.629605 | orchestrator | Sunday 01 June 2025 22:50:41 +0000 (0:00:00.786) 0:01:40.912 *********** 2025-06-01 22:51:20.629614 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:51:20.629624 | orchestrator | 2025-06-01 22:51:20.629650 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-01 22:51:20.629660 | orchestrator | Sunday 01 June 2025 22:50:48 +0000 (0:00:06.947) 0:01:47.859 *********** 2025-06-01 22:51:20.629670 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:51:20.629679 | orchestrator | 2025-06-01 22:51:20.629689 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-01 22:51:20.629698 | orchestrator | 2025-06-01 22:51:20.629708 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-01 22:51:20.629723 | orchestrator | Sunday 01 June 2025 22:50:57 +0000 (0:00:08.906) 0:01:56.766 *********** 2025-06-01 22:51:20.629733 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:51:20.629742 | orchestrator | 2025-06-01 22:51:20.629752 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-01 22:51:20.629762 | orchestrator | Sunday 01 June 2025 22:50:58 +0000 (0:00:00.593) 0:01:57.359 *********** 2025-06-01 22:51:20.629771 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:51:20.629781 | orchestrator | 2025-06-01 22:51:20.629791 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-01 22:51:20.629801 | orchestrator | Sunday 01 June 2025 22:50:58 +0000 (0:00:00.230) 0:01:57.589 *********** 2025-06-01 22:51:20.629810 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:51:20.629819 | orchestrator | 2025-06-01 22:51:20.629829 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-01 22:51:20.629839 | orchestrator | Sunday 01 
June 2025 22:51:00 +0000 (0:00:02.005) 0:01:59.594 *********** 2025-06-01 22:51:20.629848 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:51:20.629858 | orchestrator | 2025-06-01 22:51:20.629939 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-06-01 22:51:20.629958 | orchestrator | 2025-06-01 22:51:20.629968 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-06-01 22:51:20.629978 | orchestrator | Sunday 01 June 2025 22:51:15 +0000 (0:00:14.743) 0:02:14.337 *********** 2025-06-01 22:51:20.629987 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:51:20.629997 | orchestrator | 2025-06-01 22:51:20.630006 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-06-01 22:51:20.630060 | orchestrator | Sunday 01 June 2025 22:51:16 +0000 (0:00:00.920) 0:02:15.258 *********** 2025-06-01 22:51:20.630074 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-01 22:51:20.630084 | orchestrator | enable_outward_rabbitmq_True 2025-06-01 22:51:20.630093 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-01 22:51:20.630103 | orchestrator | outward_rabbitmq_restart 2025-06-01 22:51:20.630113 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:51:20.630123 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:51:20.630132 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:51:20.630142 | orchestrator | 2025-06-01 22:51:20.630151 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-06-01 22:51:20.630168 | orchestrator | skipping: no hosts matched 2025-06-01 22:51:20.630178 | orchestrator | 2025-06-01 22:51:20.630188 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-06-01 22:51:20.630198 | orchestrator | skipping: no 
hosts matched 2025-06-01 22:51:20.630207 | orchestrator | 2025-06-01 22:51:20.630217 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-06-01 22:51:20.630226 | orchestrator | skipping: no hosts matched 2025-06-01 22:51:20.630236 | orchestrator | 2025-06-01 22:51:20.630246 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:51:20.630256 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-01 22:51:20.630266 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-01 22:51:20.630276 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:51:20.630286 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:51:20.630296 | orchestrator | 2025-06-01 22:51:20.630305 | orchestrator | 2025-06-01 22:51:20.630315 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:51:20.630325 | orchestrator | Sunday 01 June 2025 22:51:18 +0000 (0:00:02.285) 0:02:17.544 *********** 2025-06-01 22:51:20.630334 | orchestrator | =============================================================================== 2025-06-01 22:51:20.630344 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 73.25s 2025-06-01 22:51:20.630353 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 15.91s 2025-06-01 22:51:20.630363 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.79s 2025-06-01 22:51:20.630372 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 4.04s 2025-06-01 22:51:20.630382 | orchestrator | Check RabbitMQ service 
-------------------------------------------------- 2.98s 2025-06-01 22:51:20.630392 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.48s 2025-06-01 22:51:20.630401 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.29s 2025-06-01 22:51:20.630411 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 1.93s 2025-06-01 22:51:20.630425 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.87s 2025-06-01 22:51:20.630435 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.86s 2025-06-01 22:51:20.630445 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.80s 2025-06-01 22:51:20.630454 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.71s 2025-06-01 22:51:20.630464 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.57s 2025-06-01 22:51:20.630474 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.55s 2025-06-01 22:51:20.630483 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.55s 2025-06-01 22:51:20.630501 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.49s 2025-06-01 22:51:20.630511 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.29s 2025-06-01 22:51:20.630520 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.27s 2025-06-01 22:51:20.630530 | orchestrator | rabbitmq : Check if running RabbitMQ is at most one version behind ------ 1.07s 2025-06-01 22:51:20.630540 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.02s 2025-06-01 22:51:20.630549 | orchestrator | 2025-06-01 22:51:20 | INFO  | Task 
47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:51:20.630565 | orchestrator | 2025-06-01 22:51:20 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:51:23.668860 | orchestrator | 2025-06-01 22:51:23 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:51:23.670508 | orchestrator | 2025-06-01 22:51:23 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:51:23.672709 | orchestrator | 2025-06-01 22:51:23 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:51:23.673175 | orchestrator | 2025-06-01 22:51:23 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:51:26.724304 | orchestrator | 2025-06-01 22:51:26 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:51:26.728759 | orchestrator | 2025-06-01 22:51:26 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:51:26.728818 | orchestrator | 2025-06-01 22:51:26 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:51:26.728832 | orchestrator | 2025-06-01 22:51:26 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:51:29.767449 | orchestrator | 2025-06-01 22:51:29 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:51:29.768357 | orchestrator | 2025-06-01 22:51:29 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:51:29.769421 | orchestrator | 2025-06-01 22:51:29 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:51:29.769473 | orchestrator | 2025-06-01 22:51:29 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:51:32.818245 | orchestrator | 2025-06-01 22:51:32 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:51:32.819351 | orchestrator | 2025-06-01 22:51:32 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state 
STARTED 2025-06-01 22:51:32.820900 | orchestrator | 2025-06-01 22:51:32 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:51:32.821166 | orchestrator | 2025-06-01 22:51:32 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:51:35.862466 | orchestrator | 2025-06-01 22:51:35 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:51:35.864311 | orchestrator | 2025-06-01 22:51:35 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:51:35.867503 | orchestrator | 2025-06-01 22:51:35 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:51:35.867532 | orchestrator | 2025-06-01 22:51:35 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:51:38.915821 | orchestrator | 2025-06-01 22:51:38 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:51:38.917105 | orchestrator | 2025-06-01 22:51:38 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:51:38.919043 | orchestrator | 2025-06-01 22:51:38 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:51:38.919150 | orchestrator | 2025-06-01 22:51:38 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:51:41.963257 | orchestrator | 2025-06-01 22:51:41 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:51:41.965395 | orchestrator | 2025-06-01 22:51:41 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:51:41.965446 | orchestrator | 2025-06-01 22:51:41 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:51:41.965460 | orchestrator | 2025-06-01 22:51:41 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:51:45.018167 | orchestrator | 2025-06-01 22:51:45 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:51:45.020824 | orchestrator | 
2025-06-01 22:51:45 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:51:45.022291 | orchestrator | 2025-06-01 22:51:45 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:51:45.022792 | orchestrator | 2025-06-01 22:51:45 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:51:48.080501 | orchestrator | 2025-06-01 22:51:48 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:51:48.081488 | orchestrator | 2025-06-01 22:51:48 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:51:48.083202 | orchestrator | 2025-06-01 22:51:48 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:51:48.083230 | orchestrator | 2025-06-01 22:51:48 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:51:51.124938 | orchestrator | 2025-06-01 22:51:51 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:51:51.125202 | orchestrator | 2025-06-01 22:51:51 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:51:51.126821 | orchestrator | 2025-06-01 22:51:51 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:51:51.126851 | orchestrator | 2025-06-01 22:51:51 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:51:54.180132 | orchestrator | 2025-06-01 22:51:54 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:51:54.182333 | orchestrator | 2025-06-01 22:51:54 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:51:54.184149 | orchestrator | 2025-06-01 22:51:54 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:51:54.184424 | orchestrator | 2025-06-01 22:51:54 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:51:57.243159 | orchestrator | 2025-06-01 22:51:57 | INFO  | Task 
7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:51:57.248512 | orchestrator | 2025-06-01 22:51:57 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:51:57.250438 | orchestrator | 2025-06-01 22:51:57 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:51:57.250558 | orchestrator | 2025-06-01 22:51:57 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:52:00.294306 | orchestrator | 2025-06-01 22:52:00 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:52:00.296352 | orchestrator | 2025-06-01 22:52:00 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:52:00.299268 | orchestrator | 2025-06-01 22:52:00 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:52:00.299881 | orchestrator | 2025-06-01 22:52:00 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:52:03.352896 | orchestrator | 2025-06-01 22:52:03 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:52:03.356618 | orchestrator | 2025-06-01 22:52:03 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:52:03.361613 | orchestrator | 2025-06-01 22:52:03 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:52:03.361674 | orchestrator | 2025-06-01 22:52:03 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:52:06.408501 | orchestrator | 2025-06-01 22:52:06 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:52:06.410613 | orchestrator | 2025-06-01 22:52:06 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:52:06.410678 | orchestrator | 2025-06-01 22:52:06 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:52:06.410701 | orchestrator | 2025-06-01 22:52:06 | INFO  | Wait 1 second(s) until the next 
check 2025-06-01 22:52:09.455499 | orchestrator | 2025-06-01 22:52:09 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:52:09.455579 | orchestrator | 2025-06-01 22:52:09 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:52:09.455606 | orchestrator | 2025-06-01 22:52:09 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:52:09.455614 | orchestrator | 2025-06-01 22:52:09 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:52:12.495848 | orchestrator | 2025-06-01 22:52:12 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:52:12.499179 | orchestrator | 2025-06-01 22:52:12 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:52:12.500595 | orchestrator | 2025-06-01 22:52:12 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:52:12.500624 | orchestrator | 2025-06-01 22:52:12 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:52:15.547699 | orchestrator | 2025-06-01 22:52:15 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:52:15.550935 | orchestrator | 2025-06-01 22:52:15 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:52:15.551969 | orchestrator | 2025-06-01 22:52:15 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:52:15.552198 | orchestrator | 2025-06-01 22:52:15 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:52:18.592798 | orchestrator | 2025-06-01 22:52:18 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:52:18.594993 | orchestrator | 2025-06-01 22:52:18 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:52:18.596912 | orchestrator | 2025-06-01 22:52:18 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 
22:52:18.597166 | orchestrator | 2025-06-01 22:52:18 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:52:21.644776 | orchestrator | 2025-06-01 22:52:21 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state STARTED 2025-06-01 22:52:21.645177 | orchestrator | 2025-06-01 22:52:21 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:52:21.646799 | orchestrator | 2025-06-01 22:52:21 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:52:21.646832 | orchestrator | 2025-06-01 22:52:21 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:52:24.694268 | orchestrator | 2025-06-01 22:52:24 | INFO  | Task 7d0f7e0f-39bf-406f-915d-acb768f06f31 is in state SUCCESS 2025-06-01 22:52:24.696135 | orchestrator | 2025-06-01 22:52:24.696177 | orchestrator | 2025-06-01 22:52:24.696190 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 22:52:24.696202 | orchestrator | 2025-06-01 22:52:24.696214 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 22:52:24.696226 | orchestrator | Sunday 01 June 2025 22:50:00 +0000 (0:00:00.177) 0:00:00.177 *********** 2025-06-01 22:52:24.696237 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:52:24.696250 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:52:24.696357 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:52:24.696373 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:52:24.696384 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:52:24.696395 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:52:24.696406 | orchestrator | 2025-06-01 22:52:24.696417 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 22:52:24.696428 | orchestrator | Sunday 01 June 2025 22:50:01 +0000 (0:00:00.783) 0:00:00.961 *********** 2025-06-01 22:52:24.696440 | orchestrator | ok: [testbed-node-0] => 
(item=enable_ovn_True) 2025-06-01 22:52:24.696451 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-06-01 22:52:24.696844 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-06-01 22:52:24.696860 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-06-01 22:52:24.696872 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-06-01 22:52:24.696883 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-06-01 22:52:24.696894 | orchestrator | 2025-06-01 22:52:24.696905 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-06-01 22:52:24.696916 | orchestrator | 2025-06-01 22:52:24.696927 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-06-01 22:52:24.696938 | orchestrator | Sunday 01 June 2025 22:50:02 +0000 (0:00:00.941) 0:00:01.902 *********** 2025-06-01 22:52:24.696951 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:52:24.696963 | orchestrator | 2025-06-01 22:52:24.696974 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-06-01 22:52:24.696985 | orchestrator | Sunday 01 June 2025 22:50:03 +0000 (0:00:01.120) 0:00:03.022 *********** 2025-06-01 22:52:24.697014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.697028 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697051 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697082 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697119 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697132 | orchestrator |
2025-06-01 22:52:24.697143 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-06-01 22:52:24.697155 | orchestrator | Sunday 01 June 2025 22:50:04 +0000 (0:00:01.398) 0:00:04.421 ***********
2025-06-01 22:52:24.697167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697179 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697220 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697231 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697243 | orchestrator |
2025-06-01 22:52:24.697255 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-06-01 22:52:24.697266 | orchestrator | Sunday 01 June 2025 22:50:07 +0000 (0:00:02.916) 0:00:07.337 ***********
2025-06-01 22:52:24.697278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697332 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697344 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697355 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697367 | orchestrator |
2025-06-01 22:52:24.697378 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-06-01 22:52:24.697390 | orchestrator | Sunday 01 June 2025 22:50:09 +0000 (0:00:01.970) 0:00:09.307 ***********
2025-06-01 22:52:24.697406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697449 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697462 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697484 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697497 | orchestrator |
2025-06-01 22:52:24.697510 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-06-01 22:52:24.697523 | orchestrator | Sunday 01 June 2025 22:50:11 +0000 (0:00:01.881) 0:00:11.188 ***********
2025-06-01 22:52:24.697536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697586 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697608 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697667 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:52:24.697686 | orchestrator |
2025-06-01 22:52:24.697707 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-06-01 22:52:24.697725 | orchestrator | Sunday 01 June 2025 22:50:12 +0000 (0:00:01.461) 0:00:12.650 ***********
2025-06-01 22:52:24.697742 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:52:24.697756 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:52:24.697769 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:52:24.697782 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:52:24.697794 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:52:24.697807 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:52:24.697819 | orchestrator |
2025-06-01 22:52:24.697831 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-06-01 22:52:24.697842 | orchestrator | Sunday 01 June 2025 22:50:15 +0000 (0:00:02.425) 0:00:15.076 ***********
2025-06-01 22:52:24.697853 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-06-01 22:52:24.697863 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-06-01 22:52:24.697874 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-06-01 22:52:24.697891 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-06-01 22:52:24.697903 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-06-01 22:52:24.697914 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-06-01 22:52:24.697924 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-01 22:52:24.697935 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-01 22:52:24.697945 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-01 22:52:24.697956 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-01 22:52:24.697967 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-01 22:52:24.697977 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-01 22:52:24.697988 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-01 22:52:24.698001 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-01 22:52:24.698012 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-01 22:52:24.698072 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-01 22:52:24.698092 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-01 22:52:24.698110 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-01 22:52:24.698142 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-01 22:52:24.698162 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-01 22:52:24.698183 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-01 22:52:24.698227 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-01 22:52:24.698239 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-01 22:52:24.698250 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-01 22:52:24.698261 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-01 22:52:24.698272 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-01 22:52:24.698282 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-01 22:52:24.698293 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-01 22:52:24.698304 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-01 22:52:24.698314 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-01 22:52:24.698325 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-01 22:52:24.698336 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-01 22:52:24.698347 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-01 22:52:24.698358 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-01 22:52:24.698369 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-01 22:52:24.698379 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-01 22:52:24.698390 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-01 22:52:24.698401 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-01 22:52:24.698412 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-01 22:52:24.698423 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-01 22:52:24.698443 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-01 22:52:24.698454 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-06-01 22:52:24.698466 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-01 22:52:24.698477 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-06-01 22:52:24.698489 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-06-01 22:52:24.698500 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-06-01 22:52:24.698511 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-06-01 22:52:24.698529 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-01 22:52:24.698541 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-01 22:52:24.698552 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-01 22:52:24.698563 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-06-01 22:52:24.698574 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-01 22:52:24.698585 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-01 22:52:24.698595 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-01 22:52:24.698606 | orchestrator |
2025-06-01 22:52:24.698617 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-01 22:52:24.698628 | orchestrator | Sunday 01 June 2025 22:50:34 +0000 (0:00:19.520) 0:00:34.596 ***********
2025-06-01 22:52:24.698675 | orchestrator |
2025-06-01 22:52:24.698693 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-01 22:52:24.698704 | orchestrator | Sunday 01 June 2025 22:50:35 +0000 (0:00:00.066) 0:00:34.663 ***********
2025-06-01 22:52:24.698715 | orchestrator |
2025-06-01 22:52:24.698726 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-01 22:52:24.698737 | orchestrator | Sunday 01 June 2025 22:50:35 +0000 (0:00:00.063) 0:00:34.726 ***********
2025-06-01 22:52:24.698747 | orchestrator |
2025-06-01 22:52:24.698758 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-01 22:52:24.698769 | orchestrator | Sunday 01 June 2025 22:50:35 +0000 (0:00:00.087) 0:00:34.814 ***********
2025-06-01 22:52:24.698780 | orchestrator |
2025-06-01 22:52:24.698791 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-01 22:52:24.698802 | orchestrator | Sunday 01 June 2025 22:50:35 +0000 (0:00:00.065) 0:00:34.880 ***********
2025-06-01 22:52:24.698813 | orchestrator |
2025-06-01 22:52:24.698823 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-01 22:52:24.698834 | orchestrator | Sunday 01 June 2025 22:50:35 +0000 (0:00:00.065) 0:00:34.945 ***********
2025-06-01 22:52:24.698845 | orchestrator |
2025-06-01 22:52:24.698856 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-06-01 22:52:24.698866 | orchestrator | Sunday 01 June 2025 22:50:35 +0000 (0:00:00.061) 0:00:35.006 ***********
2025-06-01 22:52:24.698879 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:52:24.698898 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:52:24.698918 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:52:24.698936 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:52:24.698948 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:52:24.698958 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:52:24.698969 | orchestrator |
2025-06-01 22:52:24.698980 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-06-01 22:52:24.698991 | orchestrator | Sunday 01 June 2025 22:50:37 +0000 (0:00:01.924) 0:00:36.931 ***********
2025-06-01 22:52:24.699002 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:52:24.699013 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:52:24.699023 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:52:24.699034 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:52:24.699044 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:52:24.699055 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:52:24.699068 | orchestrator |
2025-06-01 22:52:24.699086 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-06-01 22:52:24.699106 | orchestrator |
2025-06-01 22:52:24.699125 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-01 22:52:24.699148 | orchestrator | Sunday 01 June 2025 22:51:12 +0000 (0:00:34.805) 0:01:11.736 ***********
2025-06-01 22:52:24.699159 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:52:24.699170 | orchestrator |
2025-06-01 22:52:24.699181 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-01 22:52:24.699192 | orchestrator | Sunday 01 June 2025 22:51:12 +0000 (0:00:00.582) 0:01:12.319 ***********
2025-06-01 22:52:24.699202 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:52:24.699213 | orchestrator |
2025-06-01 22:52:24.699233 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-06-01 22:52:24.699244 | orchestrator | Sunday 01 June 2025 22:51:13 +0000 (0:00:00.720) 0:01:13.040 ***********
2025-06-01 22:52:24.699255 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:52:24.699266 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:52:24.699277 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:52:24.699288 | orchestrator |
2025-06-01 22:52:24.699299 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-06-01 22:52:24.699310 | orchestrator | Sunday 01 June 2025 22:51:14 +0000 (0:00:00.808) 0:01:13.848 ***********
2025-06-01 22:52:24.699320 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:52:24.699331 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:52:24.699342 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:52:24.699353 | orchestrator |
2025-06-01 22:52:24.699364 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-06-01 22:52:24.699375 | orchestrator | Sunday 01 June 2025 22:51:14 +0000 (0:00:00.349) 0:01:14.198 ***********
2025-06-01 22:52:24.699386 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:52:24.699396 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:52:24.699407 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:52:24.699418 | orchestrator |
2025-06-01 22:52:24.699429 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-06-01 22:52:24.699440 | orchestrator | Sunday 01 June 2025 22:51:14 +0000 (0:00:00.301) 0:01:14.499 ***********
2025-06-01 22:52:24.699450 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:52:24.699461 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:52:24.699472 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:52:24.699483 | orchestrator |
2025-06-01 22:52:24.699493 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-06-01 22:52:24.699504 | orchestrator | Sunday 01 June 2025 22:51:15 +0000 (0:00:00.659) 0:01:15.158 ***********
2025-06-01 22:52:24.699515 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:52:24.699526 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:52:24.699537 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:52:24.699547 | orchestrator |
2025-06-01 22:52:24.699558 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-06-01 22:52:24.699569 | orchestrator | Sunday 01 June 2025 22:51:15 +0000 (0:00:00.374) 0:01:15.533 ***********
2025-06-01 22:52:24.699580 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:52:24.699591 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:52:24.699602 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:52:24.699613 | orchestrator |
2025-06-01 22:52:24.699624 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-06-01 22:52:24.699698 | orchestrator | Sunday 01 June 2025 22:51:16 +0000 (0:00:00.350) 0:01:15.883 ***********
2025-06-01 22:52:24.699721 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:52:24.699734 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:52:24.699744 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:52:24.699755 | orchestrator |
2025-06-01 22:52:24.699766 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-06-01 22:52:24.699788 | orchestrator | Sunday 01 June 2025 22:51:16 +0000 (0:00:00.317) 0:01:16.201 ***********
2025-06-01 22:52:24.699800 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:52:24.699818 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:52:24.699829 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:52:24.699839 | orchestrator |
2025-06-01 22:52:24.699850 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-06-01 22:52:24.699861 | orchestrator | Sunday 01 June 2025 22:51:17 +0000 (0:00:00.521) 0:01:16.722 ***********
2025-06-01 22:52:24.699872 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:52:24.699883 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:52:24.699893 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:52:24.699904 | orchestrator |
2025-06-01 22:52:24.699915 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-06-01 22:52:24.699926 | orchestrator | Sunday 01 June 2025 22:51:17 +0000 (0:00:00.302) 0:01:17.024 ***********
2025-06-01 22:52:24.699937 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:52:24.699947 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:52:24.699958 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:52:24.699968 | orchestrator |
2025-06-01 22:52:24.699979 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-06-01 22:52:24.699990 | orchestrator | Sunday 01 June 2025 22:51:17 +0000 (0:00:00.311) 0:01:17.335 ***********
2025-06-01 22:52:24.700000 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:52:24.700011 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:52:24.700022 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:52:24.700032 | orchestrator |
2025-06-01 22:52:24.700043 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-06-01 22:52:24.700054 | orchestrator | Sunday 01 June 2025 22:51:17 +0000 (0:00:00.297) 0:01:17.633 ***********
2025-06-01 22:52:24.700065 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:52:24.700075 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:52:24.700086 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:52:24.700096 | orchestrator |
2025-06-01 22:52:24.700107 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-06-01 22:52:24.700118 | orchestrator | Sunday 01 June 2025 22:51:18 +0000 (0:00:00.507) 0:01:18.140 ***********
2025-06-01 22:52:24.700129 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:52:24.700139 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:52:24.700150 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:52:24.700161 | orchestrator |
2025-06-01 22:52:24.700171 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-06-01 22:52:24.700182 | orchestrator | Sunday 01 June 2025 22:51:18 +0000 (0:00:00.303) 0:01:18.443 ***********
2025-06-01 22:52:24.700193 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:52:24.700203 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:52:24.700214 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:52:24.700225 | orchestrator |
2025-06-01 22:52:24.700235 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-06-01 22:52:24.700246 | orchestrator | Sunday 01 June 2025 22:51:19 +0000 (0:00:00.341) 0:01:18.784 ***********
2025-06-01 22:52:24.700257 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:52:24.700268 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:52:24.700279 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:52:24.700290 | orchestrator |
2025-06-01 22:52:24.700307 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-06-01 22:52:24.700318 | orchestrator | Sunday 01 June 2025 22:51:19 +0000 (0:00:00.324) 0:01:19.109 ***********
2025-06-01 22:52:24.700329 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:52:24.700340 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:52:24.700351 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:52:24.700362 | orchestrator |
2025-06-01 22:52:24.700372 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-06-01 22:52:24.700383 | orchestrator | Sunday 01 June 2025 22:51:19 +0000 (0:00:00.527) 0:01:19.636 ***********
2025-06-01 22:52:24.700394 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:52:24.700405 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:52:24.700422 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:52:24.700433 | orchestrator |
2025-06-01 22:52:24.700443 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-01 22:52:24.700454 | orchestrator | Sunday 01 June 2025 22:51:20 +0000 (0:00:00.366) 0:01:20.003 ***********
2025-06-01 22:52:24.700465 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:52:24.700476 | orchestrator |
2025-06-01 22:52:24.700487 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-06-01 22:52:24.700497 | orchestrator | Sunday 01 June 2025 22:51:20 +0000 (0:00:00.622) 0:01:20.626 ***********
2025-06-01 22:52:24.700508 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:52:24.700518 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:52:24.700532 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:52:24.700551 | orchestrator |
2025-06-01 22:52:24.700570 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-06-01 22:52:24.700584 | orchestrator | Sunday 01 June 2025 22:51:21 +0000 (0:00:00.910) 0:01:21.536 ***********
2025-06-01 22:52:24.700595 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:52:24.700606 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:52:24.700617 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:52:24.700628 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-06-01 22:52:24.700679 | orchestrator | Sunday 01 June 2025 22:51:22 +0000 (0:00:00.448) 0:01:21.985 ***********
2025-06-01 22:52:24.700689 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:52:24.700700 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:52:24.700711 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:52:24.700722 | orchestrator |
2025-06-01 22:52:24.700733 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-06-01 22:52:24.700743 | orchestrator | Sunday 01 June 2025 22:51:22 +0000 (0:00:00.437) 0:01:22.422 ***********
2025-06-01 22:52:24.700754 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:52:24.700765 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:52:24.700775 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:52:24.700786 | orchestrator |
2025-06-01 22:52:24.700802 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-06-01 22:52:24.700814 | orchestrator | Sunday 01 June 2025 22:51:23 +0000 (0:00:00.327) 0:01:22.750 ***********
2025-06-01 22:52:24.700825 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:52:24.700835 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:52:24.700846 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:52:24.700857 | orchestrator |
2025-06-01 22:52:24.700868 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-06-01 22:52:24.700878 | orchestrator | Sunday 01 June 2025 22:51:23 +0000 (0:00:00.628) 0:01:23.378 ***********
2025-06-01 22:52:24.700889 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:52:24.700900 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:52:24.700911 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:52:24.700921 | orchestrator |
2025-06-01
22:52:24.700932 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-06-01 22:52:24.700943 | orchestrator | Sunday 01 June 2025 22:51:24 +0000 (0:00:00.347) 0:01:23.725 *********** 2025-06-01 22:52:24.700954 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:52:24.700965 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:52:24.700975 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:52:24.700986 | orchestrator | 2025-06-01 22:52:24.700997 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-06-01 22:52:24.701008 | orchestrator | Sunday 01 June 2025 22:51:24 +0000 (0:00:00.396) 0:01:24.121 *********** 2025-06-01 22:52:24.701018 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:52:24.701029 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:52:24.701040 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:52:24.701058 | orchestrator | 2025-06-01 22:52:24.701069 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-01 22:52:24.701079 | orchestrator | Sunday 01 June 2025 22:51:24 +0000 (0:00:00.329) 0:01:24.451 *********** 2025-06-01 22:52:24.701091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 
'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701218 | orchestrator | 2025-06-01 22:52:24.701229 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-01 22:52:24.701240 | orchestrator | Sunday 01 June 2025 22:51:26 +0000 (0:00:01.629) 0:01:26.080 *********** 2025-06-01 22:52:24.701251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701262 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701370 | orchestrator | 2025-06-01 22:52:24.701381 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-01 22:52:24.701392 | orchestrator | Sunday 01 June 2025 22:51:29 +0000 (0:00:03.483) 0:01:29.564 *********** 2025-06-01 22:52:24.701403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-06-01 22:52:24.701474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.701720 | orchestrator | 2025-06-01 22:52:24.701731 | orchestrator | TASK [ovn-db : Flush handlers] 
*************************************************
Sunday 01 June 2025 22:51:32 +0000 (0:00:02.156) 0:01:31.720 ***********

TASK [ovn-db : Flush handlers] *************************************************
Sunday 01 June 2025 22:51:32 +0000 (0:00:00.065) 0:01:31.786 ***********

TASK [ovn-db : Flush handlers] *************************************************
Sunday 01 June 2025 22:51:32 +0000 (0:00:00.063) 0:01:31.849 ***********

RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
Sunday 01 June 2025 22:51:32 +0000 (0:00:00.066) 0:01:31.916 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
Sunday 01 June 2025 22:51:34 +0000 (0:00:02.381) 0:01:34.298 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
Sunday 01 June 2025 22:51:37 +0000 (0:00:02.926) 0:01:37.224 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ovn-db : Wait for leader election] ***************************************
Sunday 01 June 2025 22:51:45 +0000 (0:00:07.612) 0:01:44.837 ***********
skipping: [testbed-node-0]

TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
Sunday 01 June 2025 22:51:45 +0000 (0:00:00.129) 0:01:44.967 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ovn-db : Configure OVN NB connection settings] ***************************
Sunday 01 June 2025 22:51:46 +0000 (0:00:00.814) 0:01:45.781 ***********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
Sunday 01 June 2025 22:51:47 +0000 (0:00:00.987) 0:01:46.769 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ovn-db : Configure OVN SB connection settings] ***************************
Sunday 01 June 2025 22:51:47 +0000 (0:00:00.789) 0:01:47.560 ***********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [ovn-db : Wait for ovn-nb-db] *********************************************
Sunday 01 June 2025 22:51:48 +0000 (0:00:00.619) 0:01:48.180 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ovn-db : Wait for ovn-sb-db] *********************************************
Sunday 01 June 2025 22:51:49 +0000 (0:00:00.684) 0:01:48.864 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ovn-db : Unset bootstrap args fact] **************************************
Sunday 01 June 2025 22:51:50 +0000 (0:00:01.378) 0:01:50.243 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
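Reviewer note: the "Get OVN_Northbound/OVN_Southbound cluster leader" tasks above explain why the connection-settings tasks change only testbed-node-0 and skip the other two nodes: the role applies them on the RAFT leader only. A minimal sketch of how such a leader/follower split can be derived from `ovs-appctl cluster/status` output (the "Role:" line is part of ovsdb-server's real status format, but the helper names and sample text here are illustrative, not kolla-ansible's actual code):

```python
def parse_cluster_role(status_output: str) -> str:
    """Extract the RAFT role ("leader" or "follower") from the
    "Role:" line of `ovs-appctl cluster/status` output."""
    for line in status_output.splitlines():
        line = line.strip()
        if line.startswith("Role:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("no Role: line found in cluster/status output")


def divide_by_role(statuses: dict[str, str]) -> tuple[list[str], list[str]]:
    """Split hosts into (leaders, followers), mirroring the role's
    'Divide hosts by their OVN NB leader/follower role' step."""
    leaders = [h for h, s in statuses.items() if parse_cluster_role(s) == "leader"]
    followers = [h for h, s in statuses.items() if parse_cluster_role(s) != "leader"]
    return leaders, followers


# Abridged, hypothetical cluster/status snippets for the three testbed nodes.
SAMPLE_STATUS = {
    "testbed-node-0": "Name: OVN_Northbound\nStatus: cluster member\nRole: leader",
    "testbed-node-1": "Name: OVN_Northbound\nStatus: cluster member\nRole: follower",
    "testbed-node-2": "Name: OVN_Northbound\nStatus: cluster member\nRole: follower",
}
```

With the sample above, `divide_by_role(SAMPLE_STATUS)` yields testbed-node-0 as the sole leader, matching the changed/skipped pattern in this run.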
2025-06-01 22:52:24.702616 | orchestrator | 2025-06-01 22:52:24.702627 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-01 22:52:24.702702 | orchestrator | Sunday 01 June 2025 22:51:50 +0000 (0:00:00.351) 0:01:50.594 *********** 2025-06-01 22:52:24.702797 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.702820 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.702833 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.702844 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.702856 | 
orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.702867 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.702896 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.702907 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.702917 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.702927 | orchestrator | 2025-06-01 22:52:24.702937 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-01 22:52:24.702946 | orchestrator | Sunday 01 June 2025 22:51:52 +0000 (0:00:01.530) 0:01:52.125 *********** 2025-06-01 22:52:24.702956 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.702971 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.702981 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.702991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.703001 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.703017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.703034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.703044 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.703054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.703064 | orchestrator | 2025-06-01 22:52:24.703074 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-01 22:52:24.703084 | orchestrator | Sunday 01 June 2025 22:51:56 +0000 (0:00:04.235) 0:01:56.360 *********** 2025-06-01 22:52:24.703094 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.703103 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.703118 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.703128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.703138 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.703155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.703165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.703182 | orchestrator | ok: [testbed-node-0] => (item={'key': 
'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.703192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 22:52:24.703202 | orchestrator | 2025-06-01 22:52:24.703212 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-01 22:52:24.703221 | orchestrator | Sunday 01 June 2025 22:51:59 +0000 (0:00:02.770) 0:01:59.131 *********** 2025-06-01 22:52:24.703231 | orchestrator | 2025-06-01 22:52:24.703241 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-01 22:52:24.703250 | orchestrator | Sunday 01 June 2025 22:51:59 +0000 (0:00:00.064) 0:01:59.196 *********** 2025-06-01 22:52:24.703260 | orchestrator | 2025-06-01 22:52:24.703270 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-01 22:52:24.703280 | orchestrator | Sunday 01 June 2025 22:51:59 +0000 (0:00:00.062) 0:01:59.258 *********** 2025-06-01 22:52:24.703289 | orchestrator | 2025-06-01 22:52:24.703299 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-01 22:52:24.703309 | orchestrator | Sunday 01 June 2025 22:51:59 +0000 (0:00:00.062) 0:01:59.320 *********** 2025-06-01 22:52:24.703318 | 
orchestrator | changed: [testbed-node-1]
2025-06-01 22:52:24.703328 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:52:24.703338 | orchestrator |
2025-06-01 22:52:24.703347 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-06-01 22:52:24.703357 | orchestrator | Sunday 01 June 2025 22:52:05 +0000 (0:00:06.097) 0:02:05.417 ***********
2025-06-01 22:52:24.703366 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:52:24.703376 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:52:24.703386 | orchestrator |
2025-06-01 22:52:24.703396 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-06-01 22:52:24.703405 | orchestrator | Sunday 01 June 2025 22:52:12 +0000 (0:00:06.330) 0:02:11.748 ***********
2025-06-01 22:52:24.703415 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:52:24.703424 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:52:24.703434 | orchestrator |
2025-06-01 22:52:24.703448 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-06-01 22:52:24.703458 | orchestrator | Sunday 01 June 2025 22:52:18 +0000 (0:00:06.324) 0:02:18.072 ***********
2025-06-01 22:52:24.703468 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:52:24.703483 | orchestrator |
2025-06-01 22:52:24.703493 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-06-01 22:52:24.703503 | orchestrator | Sunday 01 June 2025 22:52:18 +0000 (0:00:00.114) 0:02:18.187 ***********
2025-06-01 22:52:24.703513 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:52:24.703522 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:52:24.703532 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:52:24.703542 | orchestrator |
2025-06-01 22:52:24.703552 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-06-01 22:52:24.703561 | orchestrator | Sunday 01 June 2025 22:52:19 +0000 (0:00:01.008) 0:02:19.195 ***********
2025-06-01 22:52:24.703571 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:52:24.703581 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:52:24.703591 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:52:24.703600 | orchestrator |
2025-06-01 22:52:24.703610 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-06-01 22:52:24.703620 | orchestrator | Sunday 01 June 2025 22:52:20 +0000 (0:00:00.610) 0:02:19.805 ***********
2025-06-01 22:52:24.703629 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:52:24.703666 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:52:24.703683 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:52:24.703699 | orchestrator |
2025-06-01 22:52:24.703711 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-06-01 22:52:24.703721 | orchestrator | Sunday 01 June 2025 22:52:20 +0000 (0:00:00.754) 0:02:20.559 ***********
2025-06-01 22:52:24.703731 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:52:24.703740 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:52:24.703750 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:52:24.703759 | orchestrator |
2025-06-01 22:52:24.703769 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-06-01 22:52:24.703778 | orchestrator | Sunday 01 June 2025 22:52:21 +0000 (0:00:00.664) 0:02:21.224 ***********
2025-06-01 22:52:24.703788 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:52:24.703797 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:52:24.703807 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:52:24.703816 | orchestrator |
2025-06-01 22:52:24.703826 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-06-01 22:52:24.703836 | orchestrator | Sunday 01 June 2025 22:52:22 +0000 (0:00:01.065) 0:02:22.289 ***********
2025-06-01 22:52:24.703845 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:52:24.703855 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:52:24.703864 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:52:24.703874 | orchestrator |
2025-06-01 22:52:24.703883 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:52:24.703893 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0  failed=0  skipped=20  rescued=0  ignored=0
2025-06-01 22:52:24.703904 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0  failed=0  skipped=22  rescued=0  ignored=0
2025-06-01 22:52:24.703920 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0  failed=0  skipped=22  rescued=0  ignored=0
2025-06-01 22:52:24.703931 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-06-01 22:52:24.703941 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-06-01 22:52:24.703950 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-06-01 22:52:24.703960 | orchestrator |
2025-06-01 22:52:24.703970 | orchestrator |
2025-06-01 22:52:24.703979 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:52:24.703989 | orchestrator | Sunday 01 June 2025 22:52:23 +0000 (0:00:00.965) 0:02:23.254 ***********
2025-06-01 22:52:24.704005 | orchestrator | ===============================================================================
2025-06-01 22:52:24.704014 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 34.81s
2025-06-01 22:52:24.704024 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.52s
2025-06-01 22:52:24.704034 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.94s
2025-06-01 22:52:24.704050 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.26s
2025-06-01 22:52:24.704066 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.48s
2025-06-01 22:52:24.704083 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.24s
2025-06-01 22:52:24.704099 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.48s
2025-06-01 22:52:24.704115 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.92s
2025-06-01 22:52:24.704132 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.77s
2025-06-01 22:52:24.704144 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.43s
2025-06-01 22:52:24.704153 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.16s
2025-06-01 22:52:24.704163 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.97s
2025-06-01 22:52:24.704172 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.93s
2025-06-01 22:52:24.704188 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.88s
2025-06-01 22:52:24.704197 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.63s
2025-06-01 22:52:24.704207 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.53s
2025-06-01 22:52:24.704217 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.46s
2025-06-01 22:52:24.704226 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.40s
2025-06-01 22:52:24.704236 | orchestrator | ovn-db : Wait for ovn-sb-db
--------------------------------------------- 1.38s 2025-06-01 22:52:24.704245 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.12s 2025-06-01 22:52:24.704255 | orchestrator | 2025-06-01 22:52:24 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:52:24.704265 | orchestrator | 2025-06-01 22:52:24 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:52:24.704274 | orchestrator | 2025-06-01 22:52:24 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:52:27.745828 | orchestrator | 2025-06-01 22:52:27 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:52:27.746994 | orchestrator | 2025-06-01 22:52:27 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:52:27.747028 | orchestrator | 2025-06-01 22:52:27 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:52:30.796043 | orchestrator | 2025-06-01 22:52:30 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:52:30.797432 | orchestrator | 2025-06-01 22:52:30 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:52:30.797464 | orchestrator | 2025-06-01 22:52:30 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:52:33.836029 | orchestrator | 2025-06-01 22:52:33 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:52:33.837118 | orchestrator | 2025-06-01 22:52:33 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:52:33.837150 | orchestrator | 2025-06-01 22:52:33 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:52:36.889325 | orchestrator | 2025-06-01 22:52:36 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:52:36.890975 | orchestrator | 2025-06-01 22:52:36 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 
22:52:36.891124 | orchestrator | 2025-06-01 22:52:36 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:52:39.939176 | orchestrator | 2025-06-01 22:52:39 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:52:39.940378 | orchestrator | 2025-06-01 22:52:39 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:52:39.940418 | orchestrator | 2025-06-01 22:52:39 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:52:42.986722 | orchestrator | 2025-06-01 22:52:42 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:52:42.987566 | orchestrator | 2025-06-01 22:52:42 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:52:42.987597 | orchestrator | 2025-06-01 22:52:42 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:52:46.036393 | orchestrator | 2025-06-01 22:52:46 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:52:46.036502 | orchestrator | 2025-06-01 22:52:46 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:52:46.036525 | orchestrator | 2025-06-01 22:52:46 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:52:49.098303 | orchestrator | 2025-06-01 22:52:49 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:52:49.098437 | orchestrator | 2025-06-01 22:52:49 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:52:49.098455 | orchestrator | 2025-06-01 22:52:49 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:52:52.137631 | orchestrator | 2025-06-01 22:52:52 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:52:52.138867 | orchestrator | 2025-06-01 22:52:52 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:52:52.138899 | orchestrator | 2025-06-01 22:52:52 | INFO  | Wait 1 second(s) 
until the next check 2025-06-01 22:52:55.187982 | orchestrator | 2025-06-01 22:52:55 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:52:55.191329 | orchestrator | 2025-06-01 22:52:55 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:52:55.191388 | orchestrator | 2025-06-01 22:52:55 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:52:58.237818 | orchestrator | 2025-06-01 22:52:58 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:52:58.239261 | orchestrator | 2025-06-01 22:52:58 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:52:58.239298 | orchestrator | 2025-06-01 22:52:58 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:53:01.285770 | orchestrator | 2025-06-01 22:53:01 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:53:01.292085 | orchestrator | 2025-06-01 22:53:01 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:53:01.292133 | orchestrator | 2025-06-01 22:53:01 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:53:04.345194 | orchestrator | 2025-06-01 22:53:04 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:53:04.345297 | orchestrator | 2025-06-01 22:53:04 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:53:04.345326 | orchestrator | 2025-06-01 22:53:04 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:53:07.401569 | orchestrator | 2025-06-01 22:53:07 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:53:07.404193 | orchestrator | 2025-06-01 22:53:07 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:53:07.404235 | orchestrator | 2025-06-01 22:53:07 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:53:10.448186 | orchestrator | 2025-06-01 
22:53:10 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:53:10.448984 | orchestrator | 2025-06-01 22:53:10 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:53:10.449000 | orchestrator | 2025-06-01 22:53:10 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:53:13.500532 | orchestrator | 2025-06-01 22:53:13 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:53:13.500637 | orchestrator | 2025-06-01 22:53:13 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:53:13.500692 | orchestrator | 2025-06-01 22:53:13 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:53:16.564569 | orchestrator | 2025-06-01 22:53:16 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:53:16.567334 | orchestrator | 2025-06-01 22:53:16 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:53:16.568232 | orchestrator | 2025-06-01 22:53:16 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:53:19.608434 | orchestrator | 2025-06-01 22:53:19 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:53:19.610716 | orchestrator | 2025-06-01 22:53:19 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:53:19.610792 | orchestrator | 2025-06-01 22:53:19 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:53:22.665530 | orchestrator | 2025-06-01 22:53:22 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:53:22.665722 | orchestrator | 2025-06-01 22:53:22 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:53:22.665749 | orchestrator | 2025-06-01 22:53:22 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:53:25.730504 | orchestrator | 2025-06-01 22:53:25 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state 
STARTED 2025-06-01 22:53:25.735635 | orchestrator | 2025-06-01 22:53:25 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:53:25.735706 | orchestrator | 2025-06-01 22:53:25 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:53:28.788005 | orchestrator | 2025-06-01 22:53:28 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:53:28.788574 | orchestrator | 2025-06-01 22:53:28 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:53:28.788604 | orchestrator | 2025-06-01 22:53:28 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:53:31.828717 | orchestrator | 2025-06-01 22:53:31 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:53:31.829644 | orchestrator | 2025-06-01 22:53:31 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:53:31.829721 | orchestrator | 2025-06-01 22:53:31 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:53:34.876000 | orchestrator | 2025-06-01 22:53:34 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:53:34.883455 | orchestrator | 2025-06-01 22:53:34 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:53:34.883527 | orchestrator | 2025-06-01 22:53:34 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:53:37.915112 | orchestrator | 2025-06-01 22:53:37 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:53:37.916771 | orchestrator | 2025-06-01 22:53:37 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:53:37.917060 | orchestrator | 2025-06-01 22:53:37 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:53:40.968973 | orchestrator | 2025-06-01 22:53:40 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:53:40.972115 | orchestrator | 2025-06-01 22:53:40 | INFO  
| Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:53:40.972211 | orchestrator | 2025-06-01 22:53:40 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:53:44.031585 | orchestrator | 2025-06-01 22:53:44 | INFO  | Task 821c4117-31fd-4be7-af3c-915c47298bdf is in state STARTED 2025-06-01 22:53:44.033592 | orchestrator | 2025-06-01 22:53:44 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:53:44.035512 | orchestrator | 2025-06-01 22:53:44 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:53:44.035549 | orchestrator | 2025-06-01 22:53:44 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:53:47.085502 | orchestrator | 2025-06-01 22:53:47 | INFO  | Task 821c4117-31fd-4be7-af3c-915c47298bdf is in state STARTED 2025-06-01 22:53:47.085603 | orchestrator | 2025-06-01 22:53:47 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:53:47.086164 | orchestrator | 2025-06-01 22:53:47 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:53:47.086194 | orchestrator | 2025-06-01 22:53:47 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:53:50.137716 | orchestrator | 2025-06-01 22:53:50 | INFO  | Task 821c4117-31fd-4be7-af3c-915c47298bdf is in state STARTED 2025-06-01 22:53:50.138699 | orchestrator | 2025-06-01 22:53:50 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:53:50.142354 | orchestrator | 2025-06-01 22:53:50 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:53:50.143791 | orchestrator | 2025-06-01 22:53:50 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:53:53.200492 | orchestrator | 2025-06-01 22:53:53 | INFO  | Task 821c4117-31fd-4be7-af3c-915c47298bdf is in state STARTED 2025-06-01 22:53:53.203730 | orchestrator | 2025-06-01 22:53:53 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state 
STARTED 2025-06-01 22:53:53.205403 | orchestrator | 2025-06-01 22:53:53 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:53:53.205441 | orchestrator | 2025-06-01 22:53:53 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:53:56.242167 | orchestrator | 2025-06-01 22:53:56 | INFO  | Task 821c4117-31fd-4be7-af3c-915c47298bdf is in state STARTED 2025-06-01 22:53:56.242816 | orchestrator | 2025-06-01 22:53:56 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:53:56.244325 | orchestrator | 2025-06-01 22:53:56 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:53:56.247000 | orchestrator | 2025-06-01 22:53:56 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:53:59.295503 | orchestrator | 2025-06-01 22:53:59 | INFO  | Task 821c4117-31fd-4be7-af3c-915c47298bdf is in state STARTED 2025-06-01 22:53:59.296126 | orchestrator | 2025-06-01 22:53:59 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:53:59.297158 | orchestrator | 2025-06-01 22:53:59 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:53:59.297685 | orchestrator | 2025-06-01 22:53:59 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:54:02.363328 | orchestrator | 2025-06-01 22:54:02 | INFO  | Task 821c4117-31fd-4be7-af3c-915c47298bdf is in state SUCCESS 2025-06-01 22:54:02.364525 | orchestrator | 2025-06-01 22:54:02 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:54:02.367858 | orchestrator | 2025-06-01 22:54:02 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:54:02.367886 | orchestrator | 2025-06-01 22:54:02 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:54:05.427562 | orchestrator | 2025-06-01 22:54:05 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:54:05.427643 | orchestrator | 
2025-06-01 22:54:05 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:54:05.427706 | orchestrator | 2025-06-01 22:54:05 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:54:08.474947 | orchestrator | 2025-06-01 22:54:08 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:54:08.475057 | orchestrator | 2025-06-01 22:54:08 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:54:08.475072 | orchestrator | 2025-06-01 22:54:08 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:54:11.511069 | orchestrator | 2025-06-01 22:54:11 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:54:11.511318 | orchestrator | 2025-06-01 22:54:11 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:54:11.511425 | orchestrator | 2025-06-01 22:54:11 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:54:14.555443 | orchestrator | 2025-06-01 22:54:14 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:54:14.557572 | orchestrator | 2025-06-01 22:54:14 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:54:14.557608 | orchestrator | 2025-06-01 22:54:14 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:54:17.607276 | orchestrator | 2025-06-01 22:54:17 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:54:17.608894 | orchestrator | 2025-06-01 22:54:17 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:54:17.608935 | orchestrator | 2025-06-01 22:54:17 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:54:20.655234 | orchestrator | 2025-06-01 22:54:20 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:54:20.656706 | orchestrator | 2025-06-01 22:54:20 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in 
state STARTED 2025-06-01 22:54:20.656747 | orchestrator | 2025-06-01 22:54:20 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:54:23.695289 | orchestrator | 2025-06-01 22:54:23 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:54:23.697411 | orchestrator | 2025-06-01 22:54:23 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:54:23.697446 | orchestrator | 2025-06-01 22:54:23 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:54:26.744157 | orchestrator | 2025-06-01 22:54:26 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:54:26.745226 | orchestrator | 2025-06-01 22:54:26 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:54:26.745303 | orchestrator | 2025-06-01 22:54:26 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:54:29.788196 | orchestrator | 2025-06-01 22:54:29 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:54:29.789436 | orchestrator | 2025-06-01 22:54:29 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:54:29.789770 | orchestrator | 2025-06-01 22:54:29 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:54:32.833761 | orchestrator | 2025-06-01 22:54:32 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:54:32.835047 | orchestrator | 2025-06-01 22:54:32 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:54:32.835085 | orchestrator | 2025-06-01 22:54:32 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:54:35.879133 | orchestrator | 2025-06-01 22:54:35 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED 2025-06-01 22:54:35.881107 | orchestrator | 2025-06-01 22:54:35 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED 2025-06-01 22:54:35.881641 | orchestrator | 2025-06-01 22:54:35 | 
INFO  | Wait 1 second(s) until the next check
2025-06-01 22:54:38.926197 | orchestrator | 2025-06-01 22:54:38 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:54:38.928460 | orchestrator | 2025-06-01 22:54:38 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:54:38.928512 | orchestrator | 2025-06-01 22:54:38 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:54:41.968612 | orchestrator | 2025-06-01 22:54:41 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:54:41.970539 | orchestrator | 2025-06-01 22:54:41 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:54:41.970954 | orchestrator | 2025-06-01 22:54:41 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:54:45.027148 | orchestrator | 2025-06-01 22:54:45 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:54:45.027348 | orchestrator | 2025-06-01 22:54:45 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:54:45.027372 | orchestrator | 2025-06-01 22:54:45 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:54:48.077163 | orchestrator | 2025-06-01 22:54:48 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:54:48.079005 | orchestrator | 2025-06-01 22:54:48 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:54:48.079038 | orchestrator | 2025-06-01 22:54:48 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:54:51.134843 | orchestrator | 2025-06-01 22:54:51 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:54:51.136983 | orchestrator | 2025-06-01 22:54:51 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:54:51.137014 | orchestrator | 2025-06-01 22:54:51 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:54:54.183772 | orchestrator | 2025-06-01 22:54:54 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:54:54.185264 | orchestrator | 2025-06-01 22:54:54 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:54:54.185296 | orchestrator | 2025-06-01 22:54:54 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:54:57.234470 | orchestrator | 2025-06-01 22:54:57 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:54:57.234610 | orchestrator | 2025-06-01 22:54:57 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:54:57.234627 | orchestrator | 2025-06-01 22:54:57 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:55:00.279964 | orchestrator | 2025-06-01 22:55:00 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:55:00.281173 | orchestrator | 2025-06-01 22:55:00 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:55:00.282554 | orchestrator | 2025-06-01 22:55:00 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:55:03.356174 | orchestrator | 2025-06-01 22:55:03 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:55:03.360313 | orchestrator | 2025-06-01 22:55:03 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:55:03.360357 | orchestrator | 2025-06-01 22:55:03 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:55:06.411926 | orchestrator | 2025-06-01 22:55:06 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:55:06.414305 | orchestrator | 2025-06-01 22:55:06 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:55:06.414691 | orchestrator | 2025-06-01 22:55:06 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:55:09.456529 | orchestrator | 2025-06-01 22:55:09 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:55:09.458285 | orchestrator | 2025-06-01 22:55:09 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:55:09.458319 | orchestrator | 2025-06-01 22:55:09 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:55:12.515282 | orchestrator | 2025-06-01 22:55:12 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state STARTED
2025-06-01 22:55:12.516404 | orchestrator | 2025-06-01 22:55:12 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:55:12.516435 | orchestrator | 2025-06-01 22:55:12 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:55:15.569424 | orchestrator | 2025-06-01 22:55:15 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:55:15.590201 | orchestrator | 2025-06-01 22:55:15 | INFO  | Task 6dd6e14d-0025-41c5-acd2-393664d55190 is in state SUCCESS
2025-06-01 22:55:15.595995 | orchestrator |
2025-06-01 22:55:15.596066 | orchestrator | None
2025-06-01 22:55:15.596140 | orchestrator |
2025-06-01 22:55:15.596153 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 22:55:15.596165 | orchestrator |
2025-06-01 22:55:15.596183 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 22:55:15.596196 | orchestrator | Sunday 01 June 2025 22:48:43 +0000 (0:00:00.564) 0:00:00.564 ***********
2025-06-01 22:55:15.596207 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:55:15.596219 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:55:15.596230 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:55:15.596240 | orchestrator |
2025-06-01 22:55:15.596251 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 22:55:15.596262 | orchestrator | Sunday 01 June 2025 22:48:43 +0000 (0:00:00.503) 0:00:01.067 ***********
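The status loop above (a client repeatedly checking Celery-style task states until a task leaves STARTED) can be sketched as a simple polling helper. This is an illustrative stand-in, not OSISM's actual client code: `get_state`, the task IDs, and the interval are assumptions for the example.

```python
import time


def wait_for_task(task_id, get_state, interval=1.0, timeout=600.0):
    """Poll get_state(task_id) until the task leaves a running state.

    get_state is a caller-supplied callable returning a state string;
    it is a hypothetical stand-in for the real task-status API.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state not in ("PENDING", "STARTED"):
            return state  # terminal state, e.g. SUCCESS or FAILURE
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} still running after {timeout}s")


# Example with a fake state source that succeeds on the third poll:
states = iter(["STARTED", "STARTED", "SUCCESS"])
result = wait_for_task("6dd6e14d", lambda _tid: next(states), interval=0)
```

Once the watched task reports SUCCESS, the buffered Ansible play output is flushed to the console, which is why the play below carries earlier wall-clock timestamps (22:48) than the console lines that print it (22:55).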
2025-06-01 22:55:15.596274 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-06-01 22:55:15.596285 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-06-01 22:55:15.596295 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-06-01 22:55:15.596306 | orchestrator |
2025-06-01 22:55:15.596318 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-06-01 22:55:15.596350 | orchestrator |
2025-06-01 22:55:15.596361 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-06-01 22:55:15.596412 | orchestrator | Sunday 01 June 2025 22:48:44 +0000 (0:00:00.459) 0:00:01.526 ***********
2025-06-01 22:55:15.596424 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:55:15.596435 | orchestrator |
2025-06-01 22:55:15.596446 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-06-01 22:55:15.596457 | orchestrator | Sunday 01 June 2025 22:48:45 +0000 (0:00:00.856) 0:00:02.382 ***********
2025-06-01 22:55:15.596468 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:55:15.596479 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:55:15.596490 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:55:15.596500 | orchestrator |
2025-06-01 22:55:15.596511 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-06-01 22:55:15.596522 | orchestrator | Sunday 01 June 2025 22:48:46 +0000 (0:00:00.919) 0:00:03.302 ***********
2025-06-01 22:55:15.596566 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:55:15.596580 | orchestrator |
2025-06-01 22:55:15.596593 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-06-01 22:55:15.596606 | orchestrator | Sunday 01 June 2025 22:48:48 +0000 (0:00:02.195) 0:00:05.498 ***********
2025-06-01 22:55:15.596618 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:55:15.596631 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:55:15.596643 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:55:15.596656 | orchestrator |
2025-06-01 22:55:15.597284 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-06-01 22:55:15.597398 | orchestrator | Sunday 01 June 2025 22:48:49 +0000 (0:00:01.059) 0:00:06.558 ***********
2025-06-01 22:55:15.597414 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-01 22:55:15.597427 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-01 22:55:15.597440 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-01 22:55:15.597451 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-01 22:55:15.597462 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-01 22:55:15.597512 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-01 22:55:15.597524 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-01 22:55:15.597583 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-01 22:55:15.597597 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-01 22:55:15.597607 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-01 22:55:15.597649 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
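The sysctl task above marks items with a real value (`ip_nonlocal_bind`, `max_dgram_qlen`) as changed, while the `net.ipv4.tcp_retries2` item with the sentinel value `KOLLA_UNSET` reports only ok, consistent with that sentinel meaning "leave the kernel default alone". A minimal sketch of that filtering, assuming a helper of my own invention rather than kolla-ansible's actual implementation:

```python
KOLLA_UNSET = "KOLLA_UNSET"  # sentinel: do not manage this sysctl key


def render_sysctl(settings):
    """Render sysctl items as /etc/sysctl.conf-style lines, skipping
    entries whose value is the KOLLA_UNSET sentinel (illustrative only;
    kolla-ansible applies these via its sysctl role, not this helper).
    """
    return "\n".join(
        f"{item['name']}={item['value']}"
        for item in settings
        if item["value"] != KOLLA_UNSET
    )


# The items from the task output above:
settings = [
    {"name": "net.ipv6.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.tcp_retries2", "value": KOLLA_UNSET},  # skipped
    {"name": "net.unix.max_dgram_qlen", "value": 128},
]
print(render_sysctl(settings))
```

The `ip_nonlocal_bind` keys let haproxy/keepalived bind the floating VIP on nodes that do not currently hold it, which is why the loadbalancer role sets them on all three nodes.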
2025-06-01 22:55:15.597695 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-01 22:55:15.597753 | orchestrator |
2025-06-01 22:55:15.597765 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-01 22:55:15.597776 | orchestrator | Sunday 01 June 2025 22:48:52 +0000 (0:00:03.126) 0:00:09.684 ***********
2025-06-01 22:55:15.597788 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-06-01 22:55:15.597800 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-06-01 22:55:15.597811 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-06-01 22:55:15.597822 | orchestrator |
2025-06-01 22:55:15.597833 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-01 22:55:15.598195 | orchestrator | Sunday 01 June 2025 22:48:53 +0000 (0:00:01.059) 0:00:10.744 ***********
2025-06-01 22:55:15.598235 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-06-01 22:55:15.598248 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-06-01 22:55:15.598259 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-06-01 22:55:15.598270 | orchestrator |
2025-06-01 22:55:15.598281 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-01 22:55:15.598358 | orchestrator | Sunday 01 June 2025 22:48:55 +0000 (0:00:02.133) 0:00:12.877 ***********
2025-06-01 22:55:15.598372 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-06-01 22:55:15.598385 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.598429 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-06-01 22:55:15.598441 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.598469 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-06-01 22:55:15.598500 | orchestrator | skipping: [testbed-node-2]
2025-06-01
22:55:15.598511 | orchestrator | 2025-06-01 22:55:15.598522 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-06-01 22:55:15.598533 | orchestrator | Sunday 01 June 2025 22:48:57 +0000 (0:00:01.779) 0:00:14.657 *********** 2025-06-01 22:55:15.598584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-01 22:55:15.598609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-01 22:55:15.598638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-01 22:55:15.598650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 22:55:15.598680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 22:55:15.598720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 22:55:15.598733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 22:55:15.598745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 22:55:15.598756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 22:55:15.598768 | orchestrator | 2025-06-01 22:55:15.598779 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-06-01 22:55:15.598790 | orchestrator | Sunday 01 June 2025 22:48:59 +0000 (0:00:02.004) 0:00:16.661 *********** 2025-06-01 22:55:15.598801 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.598812 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.598823 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.598834 | orchestrator | 2025-06-01 22:55:15.598845 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-06-01 22:55:15.598856 | orchestrator | Sunday 01 June 2025 22:49:00 +0000 (0:00:01.148) 0:00:17.810 *********** 2025-06-01 22:55:15.598867 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-06-01 22:55:15.598878 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-06-01 22:55:15.598889 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-06-01 22:55:15.598900 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-06-01 22:55:15.598947 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-06-01 22:55:15.598958 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-06-01 22:55:15.598977 | orchestrator | 2025-06-01 22:55:15.599055 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-06-01 22:55:15.599066 | orchestrator | Sunday 01 June 2025 22:49:02 +0000 (0:00:01.617) 0:00:19.427 *********** 2025-06-01 22:55:15.599077 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.599088 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.599099 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.599109 | orchestrator | 2025-06-01 22:55:15.599120 | orchestrator | TASK 
[loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-06-01 22:55:15.599131 | orchestrator | Sunday 01 June 2025 22:49:04 +0000 (0:00:02.050) 0:00:21.478 *********** 2025-06-01 22:55:15.599142 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:55:15.599153 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:55:15.599164 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:55:15.599175 | orchestrator | 2025-06-01 22:55:15.599185 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-06-01 22:55:15.599196 | orchestrator | Sunday 01 June 2025 22:49:06 +0000 (0:00:02.350) 0:00:23.828 *********** 2025-06-01 22:55:15.599207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.599235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.599248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.599261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__789aa2b1f18857050f1c5898d18d97e7c5e56ecc', '__omit_place_holder__789aa2b1f18857050f1c5898d18d97e7c5e56ecc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-01 22:55:15.599272 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.599284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.599303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.599314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.599332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__789aa2b1f18857050f1c5898d18d97e7c5e56ecc', '__omit_place_holder__789aa2b1f18857050f1c5898d18d97e7c5e56ecc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-01 22:55:15.599349 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.599361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.599372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.599384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.599407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__789aa2b1f18857050f1c5898d18d97e7c5e56ecc', '__omit_place_holder__789aa2b1f18857050f1c5898d18d97e7c5e56ecc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-01 22:55:15.599418 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.599429 | orchestrator | 2025-06-01 22:55:15.599440 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-06-01 22:55:15.599451 | orchestrator | Sunday 01 June 2025 22:49:07 +0000 (0:00:00.748) 0:00:24.577 *********** 2025-06-01 22:55:15.599463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-01 22:55:15.599480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-01 22:55:15.599504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-01 22:55:15.599516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 22:55:15.599527 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.599546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__789aa2b1f18857050f1c5898d18d97e7c5e56ecc', '__omit_place_holder__789aa2b1f18857050f1c5898d18d97e7c5e56ecc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-01 22:55:15.599557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 22:55:15.599569 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.599593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__789aa2b1f18857050f1c5898d18d97e7c5e56ecc', '__omit_place_holder__789aa2b1f18857050f1c5898d18d97e7c5e56ecc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-01 22:55:15.599605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 22:55:15.599616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.599634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__789aa2b1f18857050f1c5898d18d97e7c5e56ecc', '__omit_place_holder__789aa2b1f18857050f1c5898d18d97e7c5e56ecc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-01 22:55:15.599646 | orchestrator | 2025-06-01 22:55:15.599657 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-06-01 22:55:15.599730 | orchestrator | Sunday 01 June 2025 22:49:13 +0000 (0:00:05.747) 0:00:30.324 *********** 2025-06-01 22:55:15.599744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-01 22:55:15.599755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-01 22:55:15.599885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-01 22:55:15.599901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 22:55:15.599913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 22:55:15.599934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 22:55:15.600132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 22:55:15.600158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 22:55:15.600170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 22:55:15.600181 | orchestrator | 2025-06-01 22:55:15.600192 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-06-01 22:55:15.600203 | orchestrator | Sunday 01 June 2025 22:49:17 +0000 (0:00:04.159) 0:00:34.484 *********** 2025-06-01 22:55:15.600215 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-01 22:55:15.600235 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-01 22:55:15.600252 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-01 22:55:15.600264 | orchestrator | 2025-06-01 22:55:15.600275 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-06-01 22:55:15.600286 | orchestrator | Sunday 01 June 2025 22:49:19 +0000 (0:00:01.807) 0:00:36.291 *********** 2025-06-01 22:55:15.600296 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-01 22:55:15.600308 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-01 22:55:15.600327 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-01 22:55:15.600338 | orchestrator | 2025-06-01 22:55:15.600348 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-06-01 22:55:15.600359 | orchestrator | Sunday 01 June 2025 22:49:24 +0000 (0:00:05.214) 0:00:41.506 *********** 2025-06-01 22:55:15.600370 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.600381 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.600391 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.600402 | orchestrator | 2025-06-01 22:55:15.600413 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-06-01 22:55:15.600424 | orchestrator | Sunday 01 June 2025 22:49:25 +0000 (0:00:01.332) 0:00:42.839 *********** 2025-06-01 22:55:15.600435 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-01 22:55:15.600448 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-01 22:55:15.600459 | orchestrator | changed: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-01 22:55:15.600470 | orchestrator | 2025-06-01 22:55:15.600481 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-06-01 22:55:15.600504 | orchestrator | Sunday 01 June 2025 22:49:27 +0000 (0:00:02.246) 0:00:45.086 *********** 2025-06-01 22:55:15.600516 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-01 22:55:15.600527 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-01 22:55:15.600539 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-01 22:55:15.600549 | orchestrator | 2025-06-01 22:55:15.600587 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-06-01 22:55:15.600763 | orchestrator | Sunday 01 June 2025 22:49:29 +0000 (0:00:01.983) 0:00:47.069 *********** 2025-06-01 22:55:15.600779 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-06-01 22:55:15.600791 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-06-01 22:55:15.600801 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-06-01 22:55:15.600812 | orchestrator | 2025-06-01 22:55:15.600823 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-06-01 22:55:15.600834 | orchestrator | Sunday 01 June 2025 22:49:31 +0000 (0:00:01.939) 0:00:49.008 *********** 2025-06-01 22:55:15.600845 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-06-01 22:55:15.600856 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-06-01 22:55:15.600866 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-06-01 22:55:15.600877 
| orchestrator | 2025-06-01 22:55:15.600887 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-01 22:55:15.600898 | orchestrator | Sunday 01 June 2025 22:49:33 +0000 (0:00:01.820) 0:00:50.829 *********** 2025-06-01 22:55:15.600909 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:55:15.600919 | orchestrator | 2025-06-01 22:55:15.600930 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-06-01 22:55:15.600941 | orchestrator | Sunday 01 June 2025 22:49:34 +0000 (0:00:01.042) 0:00:51.871 *********** 2025-06-01 22:55:15.600952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-01 22:55:15.600991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-01 22:55:15.601004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-01 22:55:15.601016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 22:55:15.601041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 22:55:15.601052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 22:55:15.601245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 22:55:15.601265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 22:55:15.601291 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 22:55:15.601303 | orchestrator | 2025-06-01 22:55:15.601315 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-06-01 22:55:15.601326 | orchestrator | Sunday 01 June 2025 22:49:38 +0000 (0:00:03.559) 0:00:55.431 *********** 2025-06-01 22:55:15.601337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.601349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.601360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.601371 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.601383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.601401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.601425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.601438 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.601449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.601461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.601485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.601497 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.601508 | orchestrator | 2025-06-01 22:55:15.601519 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-06-01 22:55:15.601558 | orchestrator | Sunday 01 June 2025 22:49:39 +0000 (0:00:00.725) 0:00:56.156 *********** 2025-06-01 22:55:15.601571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.601590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.601647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.601661 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.601695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.601707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.601719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.601730 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.601742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.601760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.601771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.601782 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.601793 | orchestrator | 2025-06-01 22:55:15.601898 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-01 22:55:15.601912 | orchestrator | Sunday 01 June 2025 22:49:40 +0000 (0:00:01.272) 0:00:57.429 *********** 2025-06-01 22:55:15.601937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.601949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.601961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.601972 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.601983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.602002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.602014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.602164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.602178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.602190 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.602201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.602213 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.602224 | orchestrator | 2025-06-01 22:55:15.602235 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-01 22:55:15.602246 | orchestrator | Sunday 01 June 2025 22:49:40 +0000 (0:00:00.565) 0:00:57.995 *********** 2025-06-01 22:55:15.602257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.602310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.602322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.602334 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.602345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.602369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.602382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.602393 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.602405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.602423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.602434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.602446 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.602456 | orchestrator | 2025-06-01 22:55:15.602467 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-01 22:55:15.602478 | orchestrator | Sunday 01 June 2025 22:49:41 +0000 (0:00:00.751) 0:00:58.746 
*********** 2025-06-01 22:55:15.602490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.602513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.602525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.602537 | orchestrator | skipping: 
[testbed-node-0] 2025-06-01 22:55:15.602548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.602568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.602579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.602591 | orchestrator | skipping: 
[testbed-node-1] 2025-06-01 22:55:15.602602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.602619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.602636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.602647 | orchestrator | skipping: 
[testbed-node-2] 2025-06-01 22:55:15.602658 | orchestrator | 2025-06-01 22:55:15.602687 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-06-01 22:55:15.602699 | orchestrator | Sunday 01 June 2025 22:49:43 +0000 (0:00:01.461) 0:01:00.207 *********** 2025-06-01 22:55:15.602710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.602729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.602740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.602752 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.602763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.602774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.602797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.602809 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.602820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.602838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.602850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.602861 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.602871 | orchestrator | 2025-06-01 22:55:15.602882 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-06-01 22:55:15.602893 | orchestrator | Sunday 01 June 2025 22:49:43 +0000 (0:00:00.701) 0:01:00.909 *********** 2025-06-01 22:55:15.602904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.602916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}})  2025-06-01 22:55:15.602939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.602951 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.602962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.602979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': 
'30'}}})  2025-06-01 22:55:15.602991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.603002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.603013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.603025 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.603035 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.603046 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.603057 | orchestrator | 2025-06-01 22:55:15.603068 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-06-01 22:55:15.603085 | orchestrator | Sunday 01 June 2025 22:49:44 +0000 (0:00:00.933) 0:01:01.842 *********** 2025-06-01 22:55:15.603102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.603121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.603132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.603143 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.603155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.603166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.603178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.603189 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.603212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-01 22:55:15.603235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 22:55:15.603247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 22:55:15.603258 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.603269 | orchestrator | 2025-06-01 22:55:15.603280 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-06-01 22:55:15.603291 | orchestrator | Sunday 01 June 2025 22:49:45 +0000 (0:00:01.244) 0:01:03.086 *********** 2025-06-01 22:55:15.603302 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-01 22:55:15.603313 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-01 22:55:15.603324 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-01 22:55:15.603335 | orchestrator | 2025-06-01 22:55:15.603346 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-06-01 22:55:15.603357 | orchestrator | Sunday 01 June 2025 22:49:47 +0000 (0:00:01.490) 0:01:04.577 *********** 2025-06-01 22:55:15.603368 | orchestrator | changed: 
[testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-01 22:55:15.603379 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-01 22:55:15.603390 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-01 22:55:15.603401 | orchestrator | 2025-06-01 22:55:15.603412 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-06-01 22:55:15.603423 | orchestrator | Sunday 01 June 2025 22:49:49 +0000 (0:00:02.022) 0:01:06.599 *********** 2025-06-01 22:55:15.603433 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-01 22:55:15.603445 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-01 22:55:15.603456 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-01 22:55:15.603467 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-01 22:55:15.603477 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.603488 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-01 22:55:15.603507 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.603518 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-01 22:55:15.603529 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.603540 | orchestrator | 2025-06-01 22:55:15.603551 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-06-01 22:55:15.603562 | orchestrator | Sunday 01 June 2025 22:49:50 +0000 (0:00:01.041) 
0:01:07.640 *********** 2025-06-01 22:55:15.603586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-01 22:55:15.603598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-01 22:55:15.603610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-01 22:55:15.603621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 22:55:15.603633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 22:55:15.603644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 22:55:15.603662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 22:55:15.603701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 22:55:15.603714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 22:55:15.603725 | orchestrator | 2025-06-01 22:55:15.603737 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-06-01 22:55:15.603748 
| orchestrator | Sunday 01 June 2025 22:49:53 +0000 (0:00:02.803) 0:01:10.444 *********** 2025-06-01 22:55:15.603759 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:55:15.603770 | orchestrator | 2025-06-01 22:55:15.603780 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-06-01 22:55:15.603791 | orchestrator | Sunday 01 June 2025 22:49:54 +0000 (0:00:00.822) 0:01:11.266 *********** 2025-06-01 22:55:15.603804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-01 22:55:15.603816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 
'timeout': '30'}}})  2025-06-01 22:55:15.603836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.603848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.603873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': 
'8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-01 22:55:15.603885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-01 22:55:15.603896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.603908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.603926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-01 22:55:15.603938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-01 22:55:15.603961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.603973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.603984 | orchestrator | 2025-06-01 22:55:15.603995 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-06-01 22:55:15.604006 | orchestrator | Sunday 01 June 2025 22:49:58 +0000 (0:00:04.193) 0:01:15.460 *********** 2025-06-01 22:55:15.604017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8042', 'listen_port': '8042'}}}})  2025-06-01 22:55:15.604029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-01 22:55:15.604047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.604059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.604070 | orchestrator | skipping: [testbed-node-0] 2025-06-01 
22:55:15.604090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-01 22:55:15.604153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-01 22:55:15.604173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-01 22:55:15.604192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.604204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-01 22:55:15.604215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.604226 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.604251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.604263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.604274 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.604285 | orchestrator | 2025-06-01 22:55:15.604296 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-06-01 22:55:15.604308 | orchestrator | Sunday 01 June 2025 22:49:59 +0000 (0:00:00.763) 
0:01:16.224 *********** 2025-06-01 22:55:15.604319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-01 22:55:15.604331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-01 22:55:15.604349 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.604361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-01 22:55:15.604372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-01 22:55:15.604383 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.604394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-01 22:55:15.604405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-01 22:55:15.604416 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.604427 | orchestrator | 2025-06-01 22:55:15.604438 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-06-01 22:55:15.604449 | orchestrator | Sunday 01 June 2025 22:50:00 +0000 (0:00:01.224) 0:01:17.449 *********** 2025-06-01 22:55:15.604459 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.604470 
| orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.604481 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.604491 | orchestrator | 2025-06-01 22:55:15.604502 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-06-01 22:55:15.604513 | orchestrator | Sunday 01 June 2025 22:50:01 +0000 (0:00:01.350) 0:01:18.800 *********** 2025-06-01 22:55:15.604523 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.604534 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.604544 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.604555 | orchestrator | 2025-06-01 22:55:15.604566 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-06-01 22:55:15.604576 | orchestrator | Sunday 01 June 2025 22:50:03 +0000 (0:00:02.133) 0:01:20.934 *********** 2025-06-01 22:55:15.604587 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:55:15.604598 | orchestrator | 2025-06-01 22:55:15.604609 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-06-01 22:55:15.604619 | orchestrator | Sunday 01 June 2025 22:50:04 +0000 (0:00:00.636) 0:01:21.570 *********** 2025-06-01 22:55:15.604645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 22:55:15.604657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.604709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.604721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 22:55:15.604733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.606404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 22:55:15.606432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.606451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.606462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.606472 | orchestrator | 2025-06-01 22:55:15.606482 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-06-01 22:55:15.606491 | orchestrator | Sunday 01 June 2025 22:50:10 +0000 (0:00:06.090) 0:01:27.660 *********** 2025-06-01 22:55:15.606502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 22:55:15.606512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 22:55:15.606553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.606573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 
22:55:15.606584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.606594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.606604 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.606614 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.606624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 22:55:15.606653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.606685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.606702 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.606712 | orchestrator | 2025-06-01 22:55:15.606722 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-06-01 22:55:15.606732 | 
orchestrator | Sunday 01 June 2025 22:50:11 +0000 (0:00:00.715) 0:01:28.376 *********** 2025-06-01 22:55:15.606742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-01 22:55:15.606753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-01 22:55:15.606763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-01 22:55:15.606775 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.606784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-01 22:55:15.606794 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.606804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-01 22:55:15.606814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-01 22:55:15.606824 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.606833 | orchestrator | 2025-06-01 22:55:15.606843 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 
2025-06-01 22:55:15.606853 | orchestrator | Sunday 01 June 2025 22:50:12 +0000 (0:00:00.998) 0:01:29.375 *********** 2025-06-01 22:55:15.606862 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.606872 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.606881 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.606891 | orchestrator | 2025-06-01 22:55:15.606901 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-06-01 22:55:15.606910 | orchestrator | Sunday 01 June 2025 22:50:13 +0000 (0:00:01.527) 0:01:30.902 *********** 2025-06-01 22:55:15.606920 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.606929 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.606939 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.606948 | orchestrator | 2025-06-01 22:55:15.606958 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-06-01 22:55:15.606968 | orchestrator | Sunday 01 June 2025 22:50:15 +0000 (0:00:02.113) 0:01:33.016 *********** 2025-06-01 22:55:15.606977 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.606987 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.606996 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.607006 | orchestrator | 2025-06-01 22:55:15.607015 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-06-01 22:55:15.607025 | orchestrator | Sunday 01 June 2025 22:50:16 +0000 (0:00:00.691) 0:01:33.708 *********** 2025-06-01 22:55:15.607041 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:55:15.607050 | orchestrator | 2025-06-01 22:55:15.607060 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-06-01 22:55:15.607070 | orchestrator | Sunday 01 June 2025 22:50:17 +0000 (0:00:01.166) 0:01:34.874 
*********** 2025-06-01 22:55:15.607143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-01 22:55:15.607158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-01 22:55:15.607169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-01 22:55:15.607179 | orchestrator | 2025-06-01 22:55:15.607189 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-06-01 22:55:15.607198 | orchestrator | Sunday 01 June 2025 22:50:21 +0000 (0:00:03.330) 0:01:38.204 *********** 2025-06-01 22:55:15.607208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-01 22:55:15.607229 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.607239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 
'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-01 22:55:15.607249 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.607283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-01 22:55:15.607295 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.607304 | orchestrator | 2025-06-01 22:55:15.607314 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-06-01 22:55:15.607324 | orchestrator | Sunday 01 June 2025 22:50:24 
+0000 (0:00:03.497) 0:01:41.701 *********** 2025-06-01 22:55:15.607334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-01 22:55:15.607346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-01 22:55:15.607358 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.607368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-01 22:55:15.607378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-01 22:55:15.607388 | orchestrator | skipping: 
[testbed-node-1] 2025-06-01 22:55:15.607404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-01 22:55:15.607415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-01 22:55:15.607425 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.607435 | orchestrator | 2025-06-01 22:55:15.607444 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-06-01 22:55:15.607454 | orchestrator | Sunday 01 June 2025 22:50:27 +0000 (0:00:03.144) 0:01:44.846 *********** 2025-06-01 22:55:15.607463 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.607473 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.607483 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.607492 | orchestrator | 2025-06-01 22:55:15.607516 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-06-01 22:55:15.607526 | orchestrator | Sunday 01 June 2025 22:50:28 +0000 (0:00:00.763) 0:01:45.610 *********** 2025-06-01 22:55:15.607536 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.607545 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.607555 | orchestrator | skipping: [testbed-node-2] 
2025-06-01 22:55:15.607564 | orchestrator | 2025-06-01 22:55:15.607574 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-06-01 22:55:15.607602 | orchestrator | Sunday 01 June 2025 22:50:29 +0000 (0:00:01.219) 0:01:46.829 *********** 2025-06-01 22:55:15.607618 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:55:15.607628 | orchestrator | 2025-06-01 22:55:15.607638 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-06-01 22:55:15.607647 | orchestrator | Sunday 01 June 2025 22:50:30 +0000 (0:00:00.587) 0:01:47.417 *********** 2025-06-01 22:55:15.607657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 22:55:15.607683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 22:55:15.607700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.607733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.607763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.607780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.607791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.607801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 22:55:15.607818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.607828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.607861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.607872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.607882 | orchestrator | 2025-06-01 22:55:15.607892 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-06-01 22:55:15.607909 | orchestrator | Sunday 01 June 2025 22:50:33 +0000 (0:00:03.577) 0:01:50.995 *********** 2025-06-01 22:55:15.607920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 22:55:15.607930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.607940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.607973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}})  2025-06-01 22:55:15.607984 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.607994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 22:55:15.608011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.608021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.608031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.608041 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.608075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': 
'30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 22:55:15.608120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.608140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.608151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.608161 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.608171 | orchestrator | 2025-06-01 22:55:15.608181 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-06-01 22:55:15.608190 | orchestrator | Sunday 01 June 2025 22:50:34 +0000 (0:00:01.036) 0:01:52.031 *********** 2025-06-01 22:55:15.608200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-01 22:55:15.608210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-01 22:55:15.608220 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.608230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-01 22:55:15.608240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-01 22:55:15.608250 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.608293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-01 22:55:15.608310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-01 22:55:15.608320 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.608330 | orchestrator | 2025-06-01 22:55:15.608340 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-06-01 22:55:15.608350 | orchestrator | Sunday 01 June 2025 22:50:36 +0000 (0:00:01.166) 0:01:53.198 *********** 2025-06-01 22:55:15.608359 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.608369 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.608386 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.608395 | orchestrator | 2025-06-01 22:55:15.608405 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-06-01 22:55:15.608415 | orchestrator | Sunday 01 June 2025 22:50:37 +0000 (0:00:01.471) 0:01:54.670 *********** 2025-06-01 22:55:15.608424 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.608434 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.608443 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.608453 | orchestrator | 2025-06-01 22:55:15.608462 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-06-01 22:55:15.608472 | orchestrator | Sunday 01 June 2025 22:50:39 +0000 (0:00:02.141) 
0:01:56.811 *********** 2025-06-01 22:55:15.608481 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.608491 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.608501 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.608510 | orchestrator | 2025-06-01 22:55:15.608520 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-06-01 22:55:15.608529 | orchestrator | Sunday 01 June 2025 22:50:40 +0000 (0:00:00.694) 0:01:57.506 *********** 2025-06-01 22:55:15.608539 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.608548 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.608558 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.608567 | orchestrator | 2025-06-01 22:55:15.608577 | orchestrator | TASK [include_role : designate] ************************************************ 2025-06-01 22:55:15.608586 | orchestrator | Sunday 01 June 2025 22:50:40 +0000 (0:00:00.485) 0:01:57.992 *********** 2025-06-01 22:55:15.608596 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:55:15.608619 | orchestrator | 2025-06-01 22:55:15.608629 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-06-01 22:55:15.608639 | orchestrator | Sunday 01 June 2025 22:50:41 +0000 (0:00:01.046) 0:01:59.038 *********** 2025-06-01 22:55:15.608649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 22:55:15.608660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 22:55:15.608687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.608732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.608743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.608754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.608764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.608774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 22:55:15.608784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 22:55:15.608827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.608838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.608849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.608859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.608869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.608879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}}}}) 2025-06-01 22:55:15.608906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 22:55:15.608917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.608927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 
'timeout': '30'}}})  2025-06-01 22:55:15.608937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.608947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.608957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.608973 | orchestrator | 2025-06-01 
22:55:15.608983 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-06-01 22:55:15.608993 | orchestrator | Sunday 01 June 2025 22:50:46 +0000 (0:00:04.921) 0:02:03.959 *********** 2025-06-01 22:55:15.609013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 22:55:15.609024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 22:55:15.609034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.609044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.609055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.609065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.609081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.609091 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.609125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 22:55:15.609136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 22:55:15.609146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.609156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.609166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.609182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.609213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  
2025-06-01 22:55:15.609224 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.609235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 22:55:15.609245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 22:55:15.609255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.609265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.609281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.609315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.609327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.609336 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.609346 | orchestrator | 2025-06-01 22:55:15.609356 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-06-01 22:55:15.609366 | orchestrator | Sunday 01 June 2025 22:50:47 +0000 (0:00:01.133) 0:02:05.093 *********** 2025-06-01 22:55:15.609376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-01 22:55:15.609386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-01 22:55:15.609396 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.609406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}})  2025-06-01 22:55:15.609415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-01 22:55:15.609425 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.609435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-01 22:55:15.609445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-01 22:55:15.609477 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.609487 | orchestrator | 2025-06-01 22:55:15.609497 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-06-01 22:55:15.609507 | orchestrator | Sunday 01 June 2025 22:50:48 +0000 (0:00:00.988) 0:02:06.082 *********** 2025-06-01 22:55:15.609517 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.609526 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.609548 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.609558 | orchestrator | 2025-06-01 22:55:15.609567 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-06-01 22:55:15.609577 | orchestrator | Sunday 01 June 2025 22:50:50 +0000 (0:00:01.768) 0:02:07.850 *********** 2025-06-01 22:55:15.609587 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.609596 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.609606 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.609616 | orchestrator | 2025-06-01 22:55:15.609626 | orchestrator | TASK [include_role : 
etcd] ***************************************************** 2025-06-01 22:55:15.609636 | orchestrator | Sunday 01 June 2025 22:50:52 +0000 (0:00:01.954) 0:02:09.804 *********** 2025-06-01 22:55:15.609645 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.609720 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.609731 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.609741 | orchestrator | 2025-06-01 22:55:15.609751 | orchestrator | TASK [include_role : glance] *************************************************** 2025-06-01 22:55:15.609760 | orchestrator | Sunday 01 June 2025 22:50:53 +0000 (0:00:00.334) 0:02:10.139 *********** 2025-06-01 22:55:15.609770 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:55:15.609779 | orchestrator | 2025-06-01 22:55:15.609789 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-06-01 22:55:15.609799 | orchestrator | Sunday 01 June 2025 22:50:53 +0000 (0:00:00.794) 0:02:10.933 *********** 2025-06-01 22:55:15.609879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 22:55:15.609901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-01 22:55:15.609946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 22:55:15.609960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-01 22:55:15.609988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 22:55:15.610000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 
rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-01 22:55:15.610054 | orchestrator | 2025-06-01 22:55:15.610067 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-06-01 22:55:15.610077 | orchestrator | Sunday 01 June 2025 22:50:58 +0000 (0:00:04.323) 0:02:15.257 *********** 2025-06-01 22:55:15.610107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-01 22:55:15.610125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}}}})  2025-06-01 22:55:15.610144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-01 
22:55:15.610155 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.610177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  
2025-06-01 22:55:15.610195 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.610206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-01 22:55:15.610241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-01 22:55:15.610260 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.610270 | orchestrator | 2025-06-01 22:55:15.610280 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] 
************************ 2025-06-01 22:55:15.610290 | orchestrator | Sunday 01 June 2025 22:51:00 +0000 (0:00:02.842) 0:02:18.100 *********** 2025-06-01 22:55:15.610300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-01 22:55:15.610311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-01 22:55:15.610322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-01 22:55:15.610332 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.610342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-01 22:55:15.610352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-01 22:55:15.610362 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.610415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-01 22:55:15.610428 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.610438 | orchestrator | 2025-06-01 22:55:15.610448 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-06-01 22:55:15.610465 | orchestrator | Sunday 01 June 2025 22:51:04 +0000 (0:00:03.193) 0:02:21.294 *********** 2025-06-01 22:55:15.610474 | 
orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.610484 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.610493 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.610503 | orchestrator | 2025-06-01 22:55:15.610512 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-06-01 22:55:15.610541 | orchestrator | Sunday 01 June 2025 22:51:05 +0000 (0:00:01.496) 0:02:22.791 *********** 2025-06-01 22:55:15.610551 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.610561 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.610570 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.610580 | orchestrator | 2025-06-01 22:55:15.610590 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-06-01 22:55:15.610599 | orchestrator | Sunday 01 June 2025 22:51:07 +0000 (0:00:02.006) 0:02:24.798 *********** 2025-06-01 22:55:15.610608 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.610618 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.610628 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.610637 | orchestrator | 2025-06-01 22:55:15.610647 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-06-01 22:55:15.610656 | orchestrator | Sunday 01 June 2025 22:51:07 +0000 (0:00:00.304) 0:02:25.102 *********** 2025-06-01 22:55:15.610689 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:55:15.610699 | orchestrator | 2025-06-01 22:55:15.610708 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-06-01 22:55:15.610718 | orchestrator | Sunday 01 June 2025 22:51:08 +0000 (0:00:00.834) 0:02:25.937 *********** 2025-06-01 22:55:15.610728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-01 22:55:15.610738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-01 22:55:15.610749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}}}}) 2025-06-01 22:55:15.610771 | orchestrator | 2025-06-01 22:55:15.610781 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-06-01 22:55:15.610801 | orchestrator | Sunday 01 June 2025 22:51:12 +0000 (0:00:03.567) 0:02:29.504 *********** 2025-06-01 22:55:15.610836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-01 22:55:15.610848 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.610858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-01 22:55:15.610868 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.610878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-01 22:55:15.610888 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.610897 | orchestrator | 2025-06-01 22:55:15.610907 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-06-01 22:55:15.610917 | orchestrator | Sunday 01 June 2025 22:51:12 +0000 (0:00:00.422) 0:02:29.927 *********** 2025-06-01 22:55:15.610926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-01 22:55:15.610937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-01 22:55:15.610946 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.610956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-01 22:55:15.610966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000'}})  2025-06-01 22:55:15.610975 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.610985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-01 22:55:15.610995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-01 22:55:15.611011 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.611020 | orchestrator | 2025-06-01 22:55:15.611030 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-06-01 22:55:15.611040 | orchestrator | Sunday 01 June 2025 22:51:13 +0000 (0:00:00.726) 0:02:30.653 *********** 2025-06-01 22:55:15.611049 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.611059 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.611068 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.611078 | orchestrator | 2025-06-01 22:55:15.611087 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-06-01 22:55:15.611097 | orchestrator | Sunday 01 June 2025 22:51:15 +0000 (0:00:01.639) 0:02:32.293 *********** 2025-06-01 22:55:15.611106 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.611116 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.611125 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.611135 | orchestrator | 2025-06-01 22:55:15.611162 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-06-01 22:55:15.611178 | orchestrator | Sunday 01 June 2025 22:51:17 +0000 (0:00:02.061) 0:02:34.354 *********** 2025-06-01 22:55:15.611188 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.611198 
| orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.611208 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.611217 | orchestrator | 2025-06-01 22:55:15.611227 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-06-01 22:55:15.611237 | orchestrator | Sunday 01 June 2025 22:51:17 +0000 (0:00:00.359) 0:02:34.713 *********** 2025-06-01 22:55:15.611246 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:55:15.611255 | orchestrator | 2025-06-01 22:55:15.611265 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-06-01 22:55:15.611274 | orchestrator | Sunday 01 June 2025 22:51:18 +0000 (0:00:00.970) 0:02:35.684 *********** 2025-06-01 22:55:15.611285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-01 22:55:15.611329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-01 22:55:15.611342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-01 22:55:15.611360 | orchestrator | 2025-06-01 22:55:15.611370 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-06-01 22:55:15.611379 | orchestrator | Sunday 01 June 2025 22:51:22 +0000 (0:00:03.565) 0:02:39.250 *********** 2025-06-01 22:55:15.611414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}})  2025-06-01 22:55:15.611427 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.611438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 
'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-01 22:55:15.611454 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.611490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-01 22:55:15.611502 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.611512 | orchestrator | 2025-06-01 22:55:15.611522 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-06-01 22:55:15.611532 | orchestrator | Sunday 01 June 2025 22:51:22 +0000 (0:00:00.655) 0:02:39.905 *********** 2025-06-01 22:55:15.611542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-01 22:55:15.611559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-01 22:55:15.611570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-01 22:55:15.611580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-01 22:55:15.611590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-01 22:55:15.611619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-01 22:55:15.611629 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.611639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-01 22:55:15.611687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-01 22:55:15.611699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-01 22:55:15.611709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-01 22:55:15.611718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-01 22:55:15.611728 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.611738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-01 22:55:15.611748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-01 22:55:15.611766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-01 22:55:15.611776 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-01 22:55:15.611786 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.611812 | orchestrator | 2025-06-01 22:55:15.611822 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-06-01 22:55:15.611832 | orchestrator | Sunday 01 June 2025 22:51:23 +0000 (0:00:00.964) 0:02:40.870 *********** 2025-06-01 22:55:15.611842 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.611851 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.611861 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.611870 | orchestrator | 2025-06-01 22:55:15.611880 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-06-01 22:55:15.611890 | orchestrator | Sunday 01 June 2025 22:51:25 +0000 (0:00:01.654) 0:02:42.525 *********** 2025-06-01 22:55:15.611899 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.611909 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.611918 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.611928 | orchestrator | 2025-06-01 22:55:15.611938 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-06-01 22:55:15.611947 | orchestrator | Sunday 01 June 2025 22:51:27 +0000 (0:00:02.103) 0:02:44.629 *********** 2025-06-01 22:55:15.611957 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.611966 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.611976 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.611986 | orchestrator | 2025-06-01 22:55:15.611995 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-06-01 22:55:15.612005 | orchestrator | Sunday 01 June 2025 22:51:27 +0000 (0:00:00.324) 0:02:44.953 *********** 2025-06-01 22:55:15.612014 | 
orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.612024 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.612045 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.612055 | orchestrator | 2025-06-01 22:55:15.612065 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-06-01 22:55:15.612074 | orchestrator | Sunday 01 June 2025 22:51:28 +0000 (0:00:00.304) 0:02:45.257 *********** 2025-06-01 22:55:15.612084 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:55:15.612094 | orchestrator | 2025-06-01 22:55:15.612103 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-06-01 22:55:15.612113 | orchestrator | Sunday 01 June 2025 22:51:29 +0000 (0:00:01.349) 0:02:46.606 *********** 2025-06-01 22:55:15.612148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 22:55:15.612168 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 22:55:15.612179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 22:55:15.612190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 22:55:15.612201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 22:55:15.612224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 22:55:15.612236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 22:55:15.612253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 22:55:15.612264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 22:55:15.612274 | orchestrator | 2025-06-01 22:55:15.612284 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-06-01 22:55:15.612293 | orchestrator | Sunday 01 June 2025 22:51:32 +0000 (0:00:03.450) 0:02:50.057 *********** 2025-06-01 22:55:15.612304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-01 22:55:15.612338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 22:55:15.612356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 22:55:15.612366 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.612377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-01 22:55:15.612388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 22:55:15.612398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 22:55:15.612408 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.612442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': 
True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-01 22:55:15.612463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 22:55:15.612473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 22:55:15.612483 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.612493 | orchestrator | 2025-06-01 22:55:15.612503 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-06-01 22:55:15.612512 | orchestrator | Sunday 01 June 2025 
22:51:33 +0000 (0:00:00.685) 0:02:50.742 *********** 2025-06-01 22:55:15.612523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-01 22:55:15.612533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-01 22:55:15.612543 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.612553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-01 22:55:15.612563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-01 22:55:15.612573 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.612583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-01 22:55:15.612594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  
2025-06-01 22:55:15.612604 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.612613 | orchestrator | 2025-06-01 22:55:15.612623 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-06-01 22:55:15.612633 | orchestrator | Sunday 01 June 2025 22:51:34 +0000 (0:00:01.055) 0:02:51.797 *********** 2025-06-01 22:55:15.612642 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.612658 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.612712 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.612722 | orchestrator | 2025-06-01 22:55:15.612732 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-06-01 22:55:15.612742 | orchestrator | Sunday 01 June 2025 22:51:36 +0000 (0:00:01.720) 0:02:53.518 *********** 2025-06-01 22:55:15.612752 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.612761 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.612771 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.612781 | orchestrator | 2025-06-01 22:55:15.612790 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-06-01 22:55:15.612831 | orchestrator | Sunday 01 June 2025 22:51:38 +0000 (0:00:02.110) 0:02:55.629 *********** 2025-06-01 22:55:15.612842 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.612852 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.612867 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.612877 | orchestrator | 2025-06-01 22:55:15.612886 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-06-01 22:55:15.612896 | orchestrator | Sunday 01 June 2025 22:51:38 +0000 (0:00:00.366) 0:02:55.996 *********** 2025-06-01 22:55:15.612905 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:55:15.612915 | 
orchestrator | 2025-06-01 22:55:15.612924 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-06-01 22:55:15.612934 | orchestrator | Sunday 01 June 2025 22:51:40 +0000 (0:00:01.414) 0:02:57.411 *********** 2025-06-01 22:55:15.612944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 22:55:15.612955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  
2025-06-01 22:55:15.612965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 22:55:15.612983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.613017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-01 22:55:15.613029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.613039 | orchestrator |
2025-06-01 22:55:15.613049 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-06-01 22:55:15.613058 | orchestrator | Sunday 01 June 2025 22:51:43 +0000 (0:00:03.409) 0:03:00.820 ***********
2025-06-01 22:55:15.613069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-01 22:55:15.613079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.613096 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.613130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-01 22:55:15.613141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.613151 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.613162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-01 22:55:15.613172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.613188 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.613198 | orchestrator |
2025-06-01 22:55:15.613208 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2025-06-01 22:55:15.613218 | orchestrator | Sunday 01 June 2025 22:51:44 +0000 (0:00:00.736) 0:03:01.557 ***********
2025-06-01 22:55:15.613228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-06-01 22:55:15.613238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-06-01 22:55:15.613249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-06-01 22:55:15.613258 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.613268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-06-01 22:55:15.613278 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.613288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-06-01 22:55:15.613298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-06-01 22:55:15.613325 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.613335 | orchestrator |
2025-06-01 22:55:15.613345 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2025-06-01 22:55:15.613360 | orchestrator | Sunday 01 June 2025 22:51:46 +0000 (0:00:01.609) 0:03:03.167 ***********
2025-06-01 22:55:15.613370 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:55:15.613380 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:55:15.613390 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:55:15.613399 | orchestrator |
2025-06-01 22:55:15.613409 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2025-06-01 22:55:15.613419 | orchestrator | Sunday 01 June 2025 22:51:47 +0000 (0:00:01.367) 0:03:04.535 ***********
2025-06-01 22:55:15.613446 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:55:15.613457 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:55:15.613466 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:55:15.613476 | orchestrator |
2025-06-01 22:55:15.613485 | orchestrator | TASK [include_role : manila] ***************************************************
2025-06-01 22:55:15.613495 | orchestrator | Sunday 01 June 2025 22:51:49 +0000 (0:00:02.019) 0:03:06.555 ***********
2025-06-01 22:55:15.613504 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:55:15.613514 | orchestrator |
2025-06-01 22:55:15.613523 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2025-06-01 22:55:15.613533 | orchestrator | Sunday 01 June 2025 22:51:50 +0000 (0:00:01.054) 0:03:07.609 ***********
2025-06-01 22:55:15.613543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-01 22:55:15.613559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.613570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.613580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.613618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-01 22:55:15.613629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.613640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.613656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.613705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-01 22:55:15.613716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.613751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.613763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.613772 | orchestrator |
2025-06-01 22:55:15.613801 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2025-06-01 22:55:15.613818 | orchestrator | Sunday 01 June 2025 22:51:55 +0000 (0:00:04.610) 0:03:12.219 ***********
2025-06-01 22:55:15.613828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-01 22:55:15.613838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.613849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.613859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.613869 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.613899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-01 22:55:15.613911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.613927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.613938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.613948 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.614000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-01 22:55:15.614087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.614124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.614143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.614153 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.614162 | orchestrator |
2025-06-01 22:55:15.614170 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2025-06-01 22:55:15.614178 | orchestrator | Sunday 01 June 2025 22:51:56 +0000 (0:00:01.011) 0:03:13.231 ***********
2025-06-01 22:55:15.614186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-06-01 22:55:15.614194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-06-01 22:55:15.614202 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.614210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-06-01 22:55:15.614219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-06-01 22:55:15.614227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-06-01 22:55:15.614235 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.614243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-06-01 22:55:15.614250 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.614259 | orchestrator |
2025-06-01 22:55:15.614267 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2025-06-01 22:55:15.614274 | orchestrator | Sunday 01 June 2025 22:51:57 +0000 (0:00:00.891) 0:03:14.122 ***********
2025-06-01 22:55:15.614282 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:55:15.614290 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:55:15.614298 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:55:15.614306 | orchestrator |
2025-06-01 22:55:15.614314 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2025-06-01 22:55:15.614322 | orchestrator | Sunday 01 June 2025 22:51:58 +0000 (0:00:01.637) 0:03:15.760 ***********
2025-06-01 22:55:15.614329 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:55:15.614337 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:55:15.614345 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:55:15.614353 | orchestrator |
2025-06-01 22:55:15.614361 | orchestrator | TASK [include_role : mariadb] **************************************************
2025-06-01 22:55:15.614369 | orchestrator | Sunday 01 June 2025 22:52:00 +0000 (0:00:01.936) 0:03:17.696 ***********
2025-06-01 22:55:15.614376 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:55:15.614384 | orchestrator |
2025-06-01 22:55:15.614392 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2025-06-01 22:55:15.614400 | orchestrator | Sunday 01 June 2025 22:52:01 +0000 (0:00:01.181) 0:03:18.877 ***********
2025-06-01 22:55:15.614408 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-01 22:55:15.614435 | orchestrator |
2025-06-01 22:55:15.614443 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2025-06-01 22:55:15.614451 | orchestrator | Sunday 01 June 2025 22:52:04 +0000 (0:00:02.969) 0:03:21.847 ***********
2025-06-01 22:55:15.614484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-01 22:55:15.614495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-01 22:55:15.614503 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.614522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-01 22:55:15.614537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-01 22:55:15.614547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-01 22:55:15.614555 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.614563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-01 22:55:15.614571 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.614579 | orchestrator |
2025-06-01 22:55:15.614587 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2025-06-01 22:55:15.614604 | orchestrator | Sunday 01 June 2025 22:52:07 +0000 (0:00:02.462) 0:03:24.310 ***********
2025-06-01 22:55:15.614634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': ['
server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 22:55:15.614644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-01 22:55:15.614653 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.614661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 22:55:15.614700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-01 22:55:15.614709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 22:55:15.614718 | 
orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.614726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-01 22:55:15.614735 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.614743 | orchestrator |
2025-06-01 22:55:15.614751 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2025-06-01 22:55:15.614759 | orchestrator | Sunday 01 June 2025 22:52:09 +0000 (0:00:02.121) 0:03:26.431 ***********
2025-06-01 22:55:15.614767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-01 22:55:15.614797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-01 22:55:15.614810 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.614819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-01 22:55:15.614827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-01 22:55:15.614835 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.614843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-01 22:55:15.614851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-01 22:55:15.614859 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.614867 | orchestrator |
2025-06-01 22:55:15.614875 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-06-01 22:55:15.614883 | orchestrator | Sunday 01 June 2025 22:52:11 +0000 (0:00:02.579) 0:03:29.010 ***********
2025-06-01 22:55:15.614896 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:55:15.614904 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:55:15.614912 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:55:15.614920 | orchestrator |
2025-06-01 22:55:15.614928 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-06-01 22:55:15.614936 | orchestrator | Sunday 01 June 2025 22:52:14 +0000 (0:00:02.322) 0:03:31.332 ***********
2025-06-01 22:55:15.614944 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.614952 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.614960 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.614967 | orchestrator |
2025-06-01 22:55:15.614975 | orchestrator | TASK [include_role : masakari] *************************************************
2025-06-01 22:55:15.614984 | orchestrator | Sunday 01 June 2025 22:52:15 +0000 (0:00:01.477) 0:03:32.810 ***********
2025-06-01 22:55:15.614991 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.614999 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.615007 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.615015 | orchestrator |
2025-06-01 22:55:15.615023 | orchestrator | TASK [include_role : memcached] ************************************************
2025-06-01 22:55:15.615031 | orchestrator | Sunday 01 June 2025 22:52:16 +0000 (0:00:00.340) 0:03:33.150 ***********
2025-06-01 22:55:15.615039 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:55:15.615047 | orchestrator |
2025-06-01 22:55:15.615054 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-06-01 22:55:15.615062 | orchestrator | Sunday 01 June 2025 22:52:17 +0000 (0:00:01.076) 0:03:34.227 ***********
2025-06-01 22:55:15.615098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-01 22:55:15.615108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-01 22:55:15.615117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-01 22:55:15.615130 | orchestrator |
2025-06-01 22:55:15.615139 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-06-01 22:55:15.615146 | orchestrator | Sunday 01 June 2025 22:52:18 +0000 (0:00:01.672) 0:03:35.899 ***********
2025-06-01 22:55:15.615155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-01 22:55:15.615178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-01 22:55:15.615186 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.615194 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.615213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-01 22:55:15.615222 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.615229 | orchestrator |
2025-06-01 22:55:15.615237 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-06-01 22:55:15.615245 | orchestrator | Sunday 01 June 2025 22:52:19 +0000 (0:00:00.398) 0:03:36.297 ***********
2025-06-01 22:55:15.615253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-01 22:55:15.615263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-01 22:55:15.615271 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.615278 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.615287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-01 22:55:15.615300 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.615308 | orchestrator |
2025-06-01 22:55:15.615316 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-06-01 22:55:15.615324 | orchestrator | Sunday 01 June 2025 22:52:19 +0000 (0:00:00.580) 0:03:36.878 ***********
2025-06-01 22:55:15.615332 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.615340 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.615348 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.615356 | orchestrator |
2025-06-01 22:55:15.615364 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-06-01 22:55:15.615372 | orchestrator | Sunday 01 June 2025 22:52:20 +0000 (0:00:00.728) 0:03:37.607 ***********
2025-06-01 22:55:15.615380 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.615388 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.615396 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.615403 | orchestrator |
2025-06-01 22:55:15.615411 | orchestrator | TASK [include_role : mistral] **************************************************
2025-06-01 22:55:15.615419 | orchestrator | Sunday 01 June 2025 22:52:21 +0000 (0:00:01.364) 0:03:38.972 ***********
2025-06-01 22:55:15.615427 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.615435 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.615443 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.615451 | orchestrator |
2025-06-01 22:55:15.615459 | orchestrator | TASK [include_role : neutron] **************************************************
2025-06-01 22:55:15.615467 | orchestrator | Sunday 01 June 2025 22:52:22 +0000 (0:00:00.359) 0:03:39.332 ***********
2025-06-01 22:55:15.615475 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:55:15.615483 | orchestrator |
2025-06-01 22:55:15.615490 | orchestrator
| TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-06-01 22:55:15.615498 | orchestrator | Sunday 01 June 2025 22:52:23 +0000 (0:00:01.700) 0:03:41.033 *********** 2025-06-01 22:55:15.615507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 22:55:15.615539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.615561 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.615576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.615584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-01 22:55:15.615592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.615601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 22:55:15.615630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 22:55:15.615640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.615654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 22:55:15.615699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.615710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-01 22:55:15.615718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 22:55:15.615736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 22:55:15.615745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.615759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-01 22:55:15.615769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.615777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-01 22:55:15.615799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.615812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.615827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port 
neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.615835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-01 22:55:15.615844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.615852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 22:55:15.615861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 22:55:15.615889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.615904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 22:55:15.615912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 22:55:15.615921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.615929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-01 22:55:15.615954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.615969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 22:55:15.615976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.615983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.615990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.615997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-01 22:55:15.616026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-01 22:55:15.616034 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-01 22:55:15.616041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 
6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 22:55:15.616063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 22:55:15.616090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 22:55:15.616109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-01 22:55:15.616123 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 22:55:15.616130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-01 22:55:15.616170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-01 22:55:15.616177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616184 | orchestrator | 2025-06-01 22:55:15.616191 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-06-01 22:55:15.616198 | orchestrator | Sunday 01 June 2025 22:52:28 +0000 (0:00:04.945) 0:03:45.979 *********** 
2025-06-01 22:55:15.616205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 22:55:15.616212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 22:55:15.616264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-01 22:55:15.616271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  
2025-06-01 22:55:15.616282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 22:55:15.616325 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 22:55:15.616340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-01 22:55:15.616351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 22:55:15.616390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 22:55:15.616397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 22:55:15.616416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-01 22:55:15.616423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 22:55:15.616439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-01 
22:55:15.616446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-01 22:55:15.616473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-01 22:55:15.616487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 22:55:15.616511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 
'timeout': '30'}}})  2025-06-01 22:55:15.616519 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.616526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-01 22:55:15.616540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 22:55:15.616547 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 22:55:15.616559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-01 22:55:15.616598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-01 22:55:15.616618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': 
False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616648 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.616655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-01 22:55:15.616675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 22:55:15.616695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 22:55:15.616702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 22:55:15.616732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-01 22:55:15.616747 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 22:55:15.616754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.616767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-06-01 22:55:15.616774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-06-01 22:55:15.616796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-01 22:55:15.616804 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.616811 | orchestrator |
2025-06-01 22:55:15.616818 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-06-01 22:55:15.616824 | orchestrator | Sunday 01 June 2025 22:52:30 +0000 (0:00:01.639) 0:03:47.618 ***********
2025-06-01 22:55:15.616832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-06-01 22:55:15.616839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-06-01 22:55:15.616858 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.616865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-06-01 22:55:15.616871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-06-01 22:55:15.616884 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.616891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-06-01 22:55:15.616906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-06-01 22:55:15.616913 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.616920 | orchestrator |
2025-06-01 22:55:15.616927 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-06-01 22:55:15.616934 | orchestrator | Sunday 01 June 2025 22:52:32 +0000 (0:00:02.128) 0:03:49.747 ***********
2025-06-01 22:55:15.616940 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:55:15.616947 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:55:15.616953 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:55:15.616960 | orchestrator |
2025-06-01 22:55:15.616967 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-06-01 22:55:15.616973 | orchestrator | Sunday 01 June 2025 22:52:33 +0000 (0:00:01.296) 0:03:51.043 ***********
2025-06-01 22:55:15.616980 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:55:15.616987 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:55:15.616993 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:55:15.617000 | orchestrator |
2025-06-01 22:55:15.617007 | orchestrator | TASK [include_role : placement] ************************************************
2025-06-01 22:55:15.617013 | orchestrator | Sunday 01 June 2025 22:52:35 +0000 (0:00:02.021) 0:03:53.065 ***********
2025-06-01 22:55:15.617020 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:55:15.617027 | orchestrator |
2025-06-01 22:55:15.617033 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-06-01 22:55:15.617040 | orchestrator | Sunday 01 June 2025 22:52:37 +0000 (0:00:01.165) 0:03:54.231 ***********
2025-06-01 22:55:15.617047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 22:55:15.617070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 22:55:15.617096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 22:55:15.617104 | orchestrator | 2025-06-01 22:55:15.617110 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-06-01 22:55:15.617117 | orchestrator | Sunday 01 June 2025 22:52:40 +0000 (0:00:03.393) 0:03:57.624 *********** 2025-06-01 22:55:15.617124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 22:55:15.617131 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.617138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 22:55:15.617145 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.617183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 22:55:15.617197 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.617203 | orchestrator | 2025-06-01 22:55:15.617210 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-06-01 22:55:15.617217 | orchestrator | Sunday 01 June 2025 22:52:41 +0000 (0:00:00.626) 0:03:58.251 *********** 2025-06-01 22:55:15.617223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-01 22:55:15.617231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-01 22:55:15.617238 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.617245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-01 22:55:15.617252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-01 22:55:15.617258 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.617265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-01 22:55:15.617272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-01 22:55:15.617279 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.617286 | orchestrator | 2025-06-01 22:55:15.617292 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-06-01 22:55:15.617299 | orchestrator | Sunday 01 June 2025 22:52:42 +0000 (0:00:00.872) 0:03:59.124 *********** 2025-06-01 22:55:15.617306 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.617312 
| orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.617319 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.617326 | orchestrator | 2025-06-01 22:55:15.617333 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-06-01 22:55:15.617339 | orchestrator | Sunday 01 June 2025 22:52:43 +0000 (0:00:01.555) 0:04:00.680 *********** 2025-06-01 22:55:15.617346 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.617353 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.617359 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.617366 | orchestrator | 2025-06-01 22:55:15.617372 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-06-01 22:55:15.617379 | orchestrator | Sunday 01 June 2025 22:52:45 +0000 (0:00:02.037) 0:04:02.717 *********** 2025-06-01 22:55:15.617386 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:55:15.617393 | orchestrator | 2025-06-01 22:55:15.617400 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-06-01 22:55:15.617406 | orchestrator | Sunday 01 June 2025 22:52:46 +0000 (0:00:01.237) 0:04:03.955 *********** 2025-06-01 22:55:15.617434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-01 22:55:15.617448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.617455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.617463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-01 22:55:15.617470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.617492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.617508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-01 22:55:15.617516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.617524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.617530 | orchestrator | 2025-06-01 22:55:15.617537 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-06-01 22:55:15.617544 | orchestrator | Sunday 01 June 2025 22:52:51 +0000 (0:00:04.468) 0:04:08.424 *********** 2025-06-01 22:55:15.617551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-01 22:55:15.617584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.617592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.617599 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.617606 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-01 22:55:15.617614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.617621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 
'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.617634 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.617660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-01 22:55:15.617680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.617687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.617694 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.617701 | orchestrator | 2025-06-01 22:55:15.617708 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-06-01 22:55:15.617715 | orchestrator | Sunday 01 June 2025 22:52:52 +0000 (0:00:00.970) 0:04:09.394 *********** 2025-06-01 22:55:15.617722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-01 22:55:15.617729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-01 22:55:15.617736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-01 22:55:15.617749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-01 22:55:15.617756 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.617763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-01 22:55:15.617770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-01 22:55:15.617777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-01 22:55:15.617784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-01 22:55:15.617791 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.617812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-01 22:55:15.617824 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-01 22:55:15.617831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-01 22:55:15.617838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-01 22:55:15.617845 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.617851 | orchestrator | 2025-06-01 22:55:15.617858 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-06-01 22:55:15.617864 | orchestrator | Sunday 01 June 2025 22:52:53 +0000 (0:00:00.928) 0:04:10.322 *********** 2025-06-01 22:55:15.617871 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.617878 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.617885 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.617891 | orchestrator | 2025-06-01 22:55:15.617898 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-06-01 22:55:15.617905 | orchestrator | Sunday 01 June 2025 22:52:54 +0000 (0:00:01.672) 0:04:11.995 *********** 2025-06-01 22:55:15.617911 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.617918 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.617925 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.617931 | orchestrator | 2025-06-01 22:55:15.617938 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-06-01 22:55:15.617945 
| orchestrator | Sunday 01 June 2025 22:52:57 +0000 (0:00:02.140) 0:04:14.136 *********** 2025-06-01 22:55:15.617951 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:55:15.617958 | orchestrator | 2025-06-01 22:55:15.617965 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-06-01 22:55:15.617971 | orchestrator | Sunday 01 June 2025 22:52:58 +0000 (0:00:01.552) 0:04:15.688 *********** 2025-06-01 22:55:15.617978 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-06-01 22:55:15.617990 | orchestrator | 2025-06-01 22:55:15.617997 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-06-01 22:55:15.618004 | orchestrator | Sunday 01 June 2025 22:52:59 +0000 (0:00:01.148) 0:04:16.837 *********** 2025-06-01 22:55:15.618011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-01 22:55:15.618044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-01 22:55:15.618052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-01 22:55:15.618059 | orchestrator | 2025-06-01 22:55:15.618066 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-06-01 22:55:15.618073 | orchestrator | Sunday 01 June 2025 22:53:03 +0000 (0:00:03.712) 0:04:20.550 *********** 2025-06-01 22:55:15.618097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 22:55:15.618105 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.618113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 
'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 22:55:15.618120 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.618127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 22:55:15.618133 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.618140 | orchestrator | 2025-06-01 22:55:15.618147 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-06-01 22:55:15.618159 | orchestrator | Sunday 01 June 2025 22:53:04 +0000 (0:00:01.228) 0:04:21.778 *********** 2025-06-01 22:55:15.618165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-01 22:55:15.618172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-01 22:55:15.618179 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.618186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-01 22:55:15.618193 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-01 22:55:15.618200 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.618207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-01 22:55:15.618214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-01 22:55:15.618221 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.618227 | orchestrator | 2025-06-01 22:55:15.618234 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-01 22:55:15.618241 | orchestrator | Sunday 01 June 2025 22:53:06 +0000 (0:00:01.903) 0:04:23.681 *********** 2025-06-01 22:55:15.618247 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.618254 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.618261 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.618267 | orchestrator | 2025-06-01 22:55:15.618274 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-01 22:55:15.618281 | orchestrator | Sunday 01 June 2025 22:53:08 +0000 (0:00:02.294) 0:04:25.976 *********** 2025-06-01 22:55:15.618287 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.618294 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.618300 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.618307 | orchestrator | 
2025-06-01 22:55:15.618313 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-06-01 22:55:15.618320 | orchestrator | Sunday 01 June 2025 22:53:11 +0000 (0:00:03.009) 0:04:28.985 *********** 2025-06-01 22:55:15.618327 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-06-01 22:55:15.618334 | orchestrator | 2025-06-01 22:55:15.618340 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-06-01 22:55:15.618361 | orchestrator | Sunday 01 June 2025 22:53:12 +0000 (0:00:00.876) 0:04:29.862 *********** 2025-06-01 22:55:15.618372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 22:55:15.618384 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.618391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 22:55:15.618398 | orchestrator | skipping: 
[testbed-node-1] 2025-06-01 22:55:15.618405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 22:55:15.618412 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.618419 | orchestrator | 2025-06-01 22:55:15.618426 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-06-01 22:55:15.618433 | orchestrator | Sunday 01 June 2025 22:53:14 +0000 (0:00:01.293) 0:04:31.155 *********** 2025-06-01 22:55:15.618439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 22:55:15.618446 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.618453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 22:55:15.618460 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.618467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 22:55:15.618474 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.618480 | orchestrator | 2025-06-01 22:55:15.618487 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-06-01 22:55:15.618494 | orchestrator | Sunday 01 June 2025 22:53:15 +0000 (0:00:01.603) 0:04:32.759 *********** 2025-06-01 22:55:15.618500 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.618507 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.618514 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.618520 | orchestrator | 2025-06-01 22:55:15.618527 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-01 22:55:15.618554 | orchestrator | Sunday 01 June 2025 22:53:16 +0000 (0:00:01.264) 0:04:34.024 *********** 2025-06-01 22:55:15.618561 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:55:15.618568 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:55:15.618578 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:55:15.618585 | orchestrator | 2025-06-01 22:55:15.618592 | orchestrator | TASK [proxysql-config : Copying over 
nova-cell ProxySQL rules config] ********** 2025-06-01 22:55:15.618599 | orchestrator | Sunday 01 June 2025 22:53:19 +0000 (0:00:02.369) 0:04:36.393 *********** 2025-06-01 22:55:15.618605 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:55:15.618612 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:55:15.618619 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:55:15.618625 | orchestrator | 2025-06-01 22:55:15.618632 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-06-01 22:55:15.618639 | orchestrator | Sunday 01 June 2025 22:53:22 +0000 (0:00:02.989) 0:04:39.383 *********** 2025-06-01 22:55:15.618646 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-06-01 22:55:15.618652 | orchestrator | 2025-06-01 22:55:15.618659 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-06-01 22:55:15.618701 | orchestrator | Sunday 01 June 2025 22:53:23 +0000 (0:00:01.070) 0:04:40.453 *********** 2025-06-01 22:55:15.618708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-01 22:55:15.618715 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.618722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-01 22:55:15.618729 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.618736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-01 22:55:15.618743 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.618749 | orchestrator | 2025-06-01 22:55:15.618756 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-06-01 22:55:15.618763 | orchestrator | Sunday 01 June 2025 22:53:24 +0000 (0:00:01.015) 0:04:41.468 *********** 2025-06-01 22:55:15.618770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-01 22:55:15.618782 | orchestrator | skipping: [testbed-node-0] 
2025-06-01 22:55:15.618789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-01 22:55:15.618795 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.618822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-01 22:55:15.618830 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.618837 | orchestrator | 2025-06-01 22:55:15.618843 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-06-01 22:55:15.618850 | orchestrator | Sunday 01 June 2025 22:53:25 +0000 (0:00:01.367) 0:04:42.836 *********** 2025-06-01 22:55:15.618857 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.618863 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.618869 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.618875 | orchestrator | 2025-06-01 22:55:15.618882 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-01 22:55:15.618888 | orchestrator | Sunday 01 
June 2025 22:53:27 +0000 (0:00:01.983) 0:04:44.820 *********** 2025-06-01 22:55:15.618894 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:55:15.618900 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:55:15.618907 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:55:15.618913 | orchestrator | 2025-06-01 22:55:15.618919 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-01 22:55:15.618925 | orchestrator | Sunday 01 June 2025 22:53:30 +0000 (0:00:02.337) 0:04:47.157 *********** 2025-06-01 22:55:15.618931 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:55:15.618937 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:55:15.618944 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:55:15.618950 | orchestrator | 2025-06-01 22:55:15.618956 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-06-01 22:55:15.618962 | orchestrator | Sunday 01 June 2025 22:53:33 +0000 (0:00:03.163) 0:04:50.321 *********** 2025-06-01 22:55:15.618968 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:55:15.618975 | orchestrator | 2025-06-01 22:55:15.618981 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-06-01 22:55:15.618987 | orchestrator | Sunday 01 June 2025 22:53:34 +0000 (0:00:01.314) 0:04:51.636 *********** 2025-06-01 22:55:15.618993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 22:55:15.619005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 22:55:15.619012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 22:55:15.619035 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 22:55:15.619043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 22:55:15.619049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 22:55:15.619056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 22:55:15.619067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.619073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 22:55:15.619092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.619104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 22:55:15.619111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  
2025-06-01 22:55:15.619117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 22:55:15.619128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 22:55:15.619135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.619141 | 
orchestrator | 2025-06-01 22:55:15.619147 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-06-01 22:55:15.619153 | orchestrator | Sunday 01 June 2025 22:53:38 +0000 (0:00:03.784) 0:04:55.420 *********** 2025-06-01 22:55:15.619177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 22:55:15.619184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 22:55:15.619191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 22:55:15.619197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 22:55:15.619209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.619215 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.619222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 22:55:15.619244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 22:55:15.619251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 22:55:15.619258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 22:55:15.619269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.619275 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.619282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 22:55:15.619288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 22:55:15.619310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 22:55:15.619317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 22:55:15.619323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 22:55:15.619334 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.619340 | orchestrator | 2025-06-01 22:55:15.619347 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-06-01 22:55:15.619353 | orchestrator | Sunday 01 June 2025 22:53:39 +0000 (0:00:00.718) 0:04:56.139 *********** 2025-06-01 22:55:15.619360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-01 22:55:15.619366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-01 22:55:15.619373 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.619379 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-01 22:55:15.619385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-01 22:55:15.619392 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.619398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-01 22:55:15.619405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-01 22:55:15.619411 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.619417 | orchestrator | 2025-06-01 22:55:15.619424 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-06-01 22:55:15.619430 | orchestrator | Sunday 01 June 2025 22:53:39 +0000 (0:00:00.932) 0:04:57.071 *********** 2025-06-01 22:55:15.619436 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.619442 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:55:15.619449 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.619455 | orchestrator | 2025-06-01 22:55:15.619461 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-06-01 22:55:15.619468 | orchestrator | Sunday 01 June 2025 22:53:41 +0000 (0:00:01.727) 0:04:58.799 *********** 2025-06-01 22:55:15.619474 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:55:15.619480 | orchestrator | changed: 
[testbed-node-1] 2025-06-01 22:55:15.619486 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:55:15.619492 | orchestrator | 2025-06-01 22:55:15.619499 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-06-01 22:55:15.619505 | orchestrator | Sunday 01 June 2025 22:53:43 +0000 (0:00:02.056) 0:05:00.855 *********** 2025-06-01 22:55:15.619511 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:55:15.619517 | orchestrator | 2025-06-01 22:55:15.619524 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-06-01 22:55:15.619530 | orchestrator | Sunday 01 June 2025 22:53:45 +0000 (0:00:01.326) 0:05:02.181 *********** 2025-06-01 22:55:15.619556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 22:55:15.619568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 22:55:15.619575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 22:55:15.619582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 22:55:15.619607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 22:55:15.619619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 22:55:15.619626 | orchestrator | 2025-06-01 22:55:15.619632 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-06-01 22:55:15.619639 | orchestrator | Sunday 01 June 2025 22:53:51 +0000 (0:00:06.227) 0:05:08.408 *********** 2025-06-01 22:55:15.619645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  
2025-06-01 22:55:15.619652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-01 22:55:15.619659 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.619694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-01 22:55:15.619707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-01 22:55:15.619714 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.619720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-01 22:55:15.619727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-01 22:55:15.619734 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.619740 | orchestrator | 2025-06-01 22:55:15.619746 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-06-01 22:55:15.619753 | orchestrator | Sunday 01 June 2025 22:53:52 +0000 (0:00:01.130) 0:05:09.538 *********** 2025-06-01 22:55:15.619759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-01 22:55:15.619783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-01 22:55:15.619794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-01 22:55:15.619800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-01 22:55:15.619807 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.619813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-01 22:55:15.619820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-01 22:55:15.619826 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.619832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-01 22:55:15.619839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-01 22:55:15.619845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-01 22:55:15.619851 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.619858 | orchestrator | 2025-06-01 22:55:15.619864 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-06-01 22:55:15.619870 | orchestrator | Sunday 01 June 2025 22:53:53 +0000 (0:00:01.053) 0:05:10.592 *********** 2025-06-01 22:55:15.619876 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.619883 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.619889 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.619895 | orchestrator | 2025-06-01 22:55:15.619901 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-06-01 22:55:15.619907 | orchestrator | Sunday 01 June 2025 22:53:53 +0000 (0:00:00.401) 0:05:10.993 *********** 2025-06-01 22:55:15.619913 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.619920 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.619926 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.619932 | orchestrator | 2025-06-01 22:55:15.619938 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-06-01 22:55:15.619944 | orchestrator | Sunday 01 June 2025 22:53:55 +0000 (0:00:01.213) 0:05:12.206 *********** 2025-06-01 22:55:15.619950 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:55:15.619957 | orchestrator | 2025-06-01 22:55:15.619963 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-06-01 22:55:15.619969 | orchestrator | Sunday 01 June 2025 22:53:56 +0000 (0:00:01.621) 0:05:13.828 *********** 2025-06-01 22:55:15.619975 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-01 22:55:15.620000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 22:55:15.620011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 22:55:15.620031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-01 22:55:15.620037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 22:55:15.620048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-01 22:55:15.620071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 22:55:15.620085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 22:55:15.620105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 22:55:15.620137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-01 22:55:15.620145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-01 22:55:15.620152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-01 22:55:15.620159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-01 22:55:15.620177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-01 22:55:15.620213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-01 22:55:15.620220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-01 22:55:15.620231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-01 22:55:15.620246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-01 22:55:15.620266 | orchestrator |
2025-06-01 22:55:15.620273 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-06-01 22:55:15.620279 | orchestrator | Sunday 01 June 2025 22:54:00 +0000 (0:00:04.259) 0:05:18.088 ***********
2025-06-01 22:55:15.620285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-01 22:55:15.620301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 22:55:15.620307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 22:55:15.620335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-01 22:55:15.620342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-01 22:55:15.620353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-01 22:55:15.620373 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.620386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-01 22:55:15.620393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 22:55:15.620400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 22:55:15.620424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-01 22:55:15.620438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-01 22:55:15.620446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-01 22:55:15.620452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 22:55:15.620469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-01 22:55:15.620482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620488 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.620503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 22:55:15.620517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-01 22:55:15.620528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-01 22:55:15.620534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 22:55:15.620555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-01 22:55:15.620561 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.620568 | orchestrator |
2025-06-01 22:55:15.620574 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2025-06-01 22:55:15.620581 | orchestrator | Sunday 01 June 2025 22:54:02 +0000 (0:00:01.708) 0:05:19.796 ***********
2025-06-01 22:55:15.620587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-06-01 22:55:15.620594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-06-01 22:55:15.620601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-06-01 22:55:15.620612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-01 22:55:15.620618 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.620624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-01 22:55:15.620631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-01 22:55:15.620637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-01 22:55:15.620644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-01 22:55:15.620650 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.620657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-01 22:55:15.620679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-01 22:55:15.620686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-01 22:55:15.620692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-01 22:55:15.620699 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.620705 | orchestrator | 2025-06-01 22:55:15.620711 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-06-01 22:55:15.620717 | orchestrator | Sunday 01 June 2025 22:54:03 +0000 (0:00:01.042) 0:05:20.839 *********** 2025-06-01 22:55:15.620724 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.620730 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.620740 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.620746 | orchestrator | 2025-06-01 22:55:15.620752 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-06-01 22:55:15.620758 | orchestrator | Sunday 01 June 2025 22:54:04 +0000 (0:00:00.442) 0:05:21.282 *********** 2025-06-01 22:55:15.620768 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.620775 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.620781 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.620787 | orchestrator | 2025-06-01 22:55:15.620794 | orchestrator | TASK [include_role : rabbitmq] 
************************************************* 2025-06-01 22:55:15.620805 | orchestrator | Sunday 01 June 2025 22:54:05 +0000 (0:00:01.730) 0:05:23.013 *********** 2025-06-01 22:55:15.620811 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:55:15.620817 | orchestrator | 2025-06-01 22:55:15.620823 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-06-01 22:55:15.620829 | orchestrator | Sunday 01 June 2025 22:54:07 +0000 (0:00:01.772) 0:05:24.785 *********** 2025-06-01 22:55:15.620849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 22:55:15.620856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 22:55:15.620864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 22:55:15.620871 | orchestrator | 2025-06-01 22:55:15.620877 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-06-01 22:55:15.620883 | orchestrator | Sunday 01 June 2025 22:54:10 +0000 (0:00:02.448) 0:05:27.233 *********** 2025-06-01 22:55:15.620897 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-01 22:55:15.620908 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.620915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': 
{'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-01 22:55:15.620922 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.620928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-01 22:55:15.620935 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.620941 | orchestrator | 2025-06-01 22:55:15.620947 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-06-01 22:55:15.620954 | orchestrator | Sunday 01 June 2025 22:54:10 +0000 (0:00:00.417) 0:05:27.650 *********** 2025-06-01 22:55:15.620960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-01 22:55:15.620966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-01 22:55:15.620972 | orchestrator | skipping: 
[testbed-node-0] 2025-06-01 22:55:15.620979 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.620985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-01 22:55:15.620991 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.620997 | orchestrator | 2025-06-01 22:55:15.621008 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-06-01 22:55:15.621014 | orchestrator | Sunday 01 June 2025 22:54:11 +0000 (0:00:01.088) 0:05:28.739 *********** 2025-06-01 22:55:15.621020 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.621026 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.621032 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.621038 | orchestrator | 2025-06-01 22:55:15.621044 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-06-01 22:55:15.621051 | orchestrator | Sunday 01 June 2025 22:54:12 +0000 (0:00:00.420) 0:05:29.160 *********** 2025-06-01 22:55:15.621057 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.621063 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.621069 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:55:15.621075 | orchestrator | 2025-06-01 22:55:15.621081 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-06-01 22:55:15.621091 | orchestrator | Sunday 01 June 2025 22:54:13 +0000 (0:00:01.424) 0:05:30.584 *********** 2025-06-01 22:55:15.621100 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:55:15.621107 | orchestrator | 2025-06-01 22:55:15.621113 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-06-01 22:55:15.621119 | orchestrator | Sunday 01 June 2025 22:54:15 
+0000 (0:00:01.775) 0:05:32.359 *********** 2025-06-01 22:55:15.621125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-01 22:55:15.621132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 
'no'}}}}) 2025-06-01 22:55:15.621139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-01 22:55:15.621150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-01 22:55:15.621165 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-01 22:55:15.621172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-01 22:55:15.621178 | orchestrator | 2025-06-01 22:55:15.621185 | orchestrator | TASK [haproxy-config : 
Add configuration for skyline when using single external frontend] *** 2025-06-01 22:55:15.621191 | orchestrator | Sunday 01 June 2025 22:54:21 +0000 (0:00:06.200) 0:05:38.560 *********** 2025-06-01 22:55:15.621197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-01 22:55:15.621208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-01 22:55:15.621215 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:55:15.621229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-01 22:55:15.621236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-01 22:55:15.621242 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:55:15.621249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-01 22:55:15.621259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 
'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-06-01 22:55:15.621266 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.621272 | orchestrator |
2025-06-01 22:55:15.621278 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2025-06-01 22:55:15.621285 | orchestrator | Sunday 01 June 2025 22:54:22 +0000 (0:00:00.623) 0:05:39.184 ***********
2025-06-01 22:55:15.621291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-01 22:55:15.621305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-01 22:55:15.621312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-01 22:55:15.621318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-01 22:55:15.621324 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.621331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-01 22:55:15.621337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-01 22:55:15.621343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-01 22:55:15.621350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-01 22:55:15.621356 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.621363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-01 22:55:15.621369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-01 22:55:15.621381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-01 22:55:15.621387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-01 22:55:15.621393 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.621400 | orchestrator |
2025-06-01 22:55:15.621406 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-06-01 22:55:15.621412 | orchestrator | Sunday 01 June 2025 22:54:23 +0000 (0:00:01.655) 0:05:40.839 ***********
2025-06-01 22:55:15.621418 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:55:15.621424 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:55:15.621430 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:55:15.621436 | orchestrator |
2025-06-01 22:55:15.621443 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-06-01 22:55:15.621449 | orchestrator | Sunday 01 June 2025 22:54:25 +0000 (0:00:01.286) 0:05:42.125 ***********
2025-06-01 22:55:15.621455 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:55:15.621461 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:55:15.621467 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:55:15.621473 | orchestrator |
2025-06-01 22:55:15.621480 | orchestrator | TASK [include_role : swift] ****************************************************
2025-06-01 22:55:15.621486 | orchestrator | Sunday 01 June 2025 22:54:27 +0000 (0:00:02.177) 0:05:44.303 ***********
2025-06-01 22:55:15.621492 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.621498 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.621504 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.621510 | orchestrator |
2025-06-01 22:55:15.621517 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-06-01 22:55:15.621523 | orchestrator | Sunday 01 June 2025 22:54:27 +0000 (0:00:00.322) 0:05:44.625 ***********
2025-06-01 22:55:15.621529 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.621535 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.621541 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.621547 | orchestrator |
2025-06-01 22:55:15.621553 | orchestrator | TASK [include_role : trove] ****************************************************
2025-06-01 22:55:15.621559 | orchestrator | Sunday 01 June 2025 22:54:27 +0000 (0:00:00.289) 0:05:44.914 ***********
2025-06-01 22:55:15.621566 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.621571 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.621578 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.621584 | orchestrator |
2025-06-01 22:55:15.621590 | orchestrator | TASK [include_role : venus] ****************************************************
2025-06-01 22:55:15.621596 | orchestrator | Sunday 01 June 2025 22:54:28 +0000 (0:00:00.660) 0:05:45.574 ***********
2025-06-01 22:55:15.621606 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.621612 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.621622 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.621628 | orchestrator |
2025-06-01 22:55:15.621634 | orchestrator | TASK [include_role : watcher] **************************************************
2025-06-01 22:55:15.621640 | orchestrator | Sunday 01 June 2025 22:54:28 +0000 (0:00:00.317) 0:05:45.892 ***********
2025-06-01 22:55:15.621647 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.621653 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.621659 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.621677 | orchestrator |
2025-06-01 22:55:15.621683 | orchestrator | TASK [include_role : zun] ******************************************************
2025-06-01 22:55:15.621689 | orchestrator | Sunday 01 June 2025 22:54:29 +0000 (0:00:00.312) 0:05:46.205 ***********
2025-06-01 22:55:15.621700 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.621706 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.621713 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.621719 | orchestrator |
2025-06-01 22:55:15.621725 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-06-01 22:55:15.621731 | orchestrator | Sunday 01 June 2025 22:54:30 +0000 (0:00:00.935) 0:05:47.141 ***********
2025-06-01 22:55:15.621737 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:55:15.621743 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:55:15.621750 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:55:15.621756 | orchestrator |
2025-06-01 22:55:15.621762 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-06-01 22:55:15.621768 | orchestrator | Sunday 01 June 2025 22:54:30 +0000 (0:00:00.651) 0:05:47.792 ***********
2025-06-01 22:55:15.621774 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:55:15.621780 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:55:15.621786 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:55:15.621792 | orchestrator |
2025-06-01 22:55:15.621799 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-06-01 22:55:15.621805 | orchestrator | Sunday 01 June 2025 22:54:31 +0000 (0:00:00.344) 0:05:48.136 ***********
2025-06-01 22:55:15.621811 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:55:15.621817 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:55:15.621823 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:55:15.621829 | orchestrator |
2025-06-01 22:55:15.621835 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-06-01 22:55:15.621841 | orchestrator | Sunday 01 June 2025 22:54:31 +0000 (0:00:00.854) 0:05:48.991 ***********
2025-06-01 22:55:15.621847 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:55:15.621853 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:55:15.621859 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:55:15.621865 | orchestrator |
2025-06-01 22:55:15.621872 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-06-01 22:55:15.621878 | orchestrator | Sunday 01 June 2025 22:54:33 +0000 (0:00:01.265) 0:05:50.257 ***********
2025-06-01 22:55:15.621884 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:55:15.621890 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:55:15.621896 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:55:15.621902 | orchestrator |
2025-06-01 22:55:15.621908 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-06-01 22:55:15.621915 | orchestrator | Sunday 01 June 2025 22:54:34 +0000 (0:00:00.882) 0:05:51.140 ***********
2025-06-01 22:55:15.621921 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:55:15.621927 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:55:15.621933 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:55:15.621939 | orchestrator |
2025-06-01 22:55:15.621945 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-06-01 22:55:15.621951 | orchestrator | Sunday 01 June 2025 22:54:42 +0000 (0:00:08.298) 0:05:59.438 ***********
2025-06-01 22:55:15.621957 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:55:15.621963 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:55:15.621969 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:55:15.621975 | orchestrator |
2025-06-01 22:55:15.621982 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-06-01 22:55:15.621988 | orchestrator | Sunday 01 June 2025 22:54:43 +0000 (0:00:00.736) 0:06:00.175 ***********
2025-06-01 22:55:15.621994 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:55:15.622000 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:55:15.622006 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:55:15.622012 | orchestrator |
2025-06-01 22:55:15.622041 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-06-01 22:55:15.622047 | orchestrator | Sunday 01 June 2025 22:54:56 +0000 (0:00:13.886) 0:06:14.062 ***********
2025-06-01 22:55:15.622054 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:55:15.622060 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:55:15.622066 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:55:15.622077 | orchestrator |
2025-06-01 22:55:15.622083 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-06-01 22:55:15.622090 | orchestrator | Sunday 01 June 2025 22:54:57 +0000 (0:00:00.763) 0:06:14.825 ***********
2025-06-01 22:55:15.622096 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:55:15.622102 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:55:15.622108 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:55:15.622114 | orchestrator |
2025-06-01 22:55:15.622120 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-06-01 22:55:15.622126 | orchestrator | Sunday 01 June 2025 22:55:02 +0000 (0:00:04.609) 0:06:19.435 ***********
2025-06-01 22:55:15.622132 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.622139 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.622145 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.622151 | orchestrator |
2025-06-01 22:55:15.622157 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-06-01 22:55:15.622163 | orchestrator | Sunday 01 June 2025 22:55:02 +0000 (0:00:00.361) 0:06:19.797 ***********
2025-06-01 22:55:15.622169 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.622175 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.622181 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.622187 | orchestrator |
2025-06-01 22:55:15.622194 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-06-01 22:55:15.622200 | orchestrator | Sunday 01 June 2025 22:55:03 +0000 (0:00:00.948) 0:06:20.746 ***********
2025-06-01 22:55:15.622206 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.622212 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.622224 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.622230 | orchestrator |
2025-06-01 22:55:15.622242 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-06-01 22:55:15.622248 | orchestrator | Sunday 01 June 2025 22:55:04 +0000 (0:00:00.369) 0:06:21.115 ***********
2025-06-01 22:55:15.622254 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.622261 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.622267 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.622273 | orchestrator |
2025-06-01 22:55:15.622279 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-06-01 22:55:15.622285 | orchestrator | Sunday 01 June 2025 22:55:04 +0000 (0:00:00.432) 0:06:21.548 ***********
2025-06-01 22:55:15.622291 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.622298 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.622304 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.622310 | orchestrator |
2025-06-01 22:55:15.622316 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-06-01 22:55:15.622322 | orchestrator | Sunday 01 June 2025 22:55:04 +0000 (0:00:00.406) 0:06:21.955 ***********
2025-06-01 22:55:15.622328 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:55:15.622334 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:55:15.622341 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:55:15.622347 | orchestrator |
2025-06-01 22:55:15.622353 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-06-01 22:55:15.622359 | orchestrator | Sunday 01 June 2025 22:55:05 +0000 (0:00:00.926) 0:06:22.881 ***********
2025-06-01 22:55:15.622365 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:55:15.622371 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:55:15.622377 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:55:15.622383 | orchestrator |
2025-06-01 22:55:15.622390 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-06-01 22:55:15.622396 | orchestrator | Sunday 01 June 2025 22:55:10 +0000 (0:00:04.852) 0:06:27.733 ***********
2025-06-01 22:55:15.622402 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:55:15.622408 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:55:15.622414 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:55:15.622420 | orchestrator |
2025-06-01 22:55:15.622426 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:55:15.622437 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-06-01 22:55:15.622444 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-06-01 22:55:15.622450 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-06-01 22:55:15.622457 | orchestrator |
2025-06-01 22:55:15.622463 | orchestrator |
2025-06-01 22:55:15.622469 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:55:15.622475 | orchestrator | Sunday 01 June 2025 22:55:11 +0000 (0:00:00.813) 0:06:28.547 ***********
2025-06-01 22:55:15.622481 | orchestrator | ===============================================================================
2025-06-01 22:55:15.622488 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.89s
2025-06-01 22:55:15.622494 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.30s
2025-06-01 22:55:15.622500 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.23s
2025-06-01 22:55:15.622506 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.20s
2025-06-01 22:55:15.622512 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 6.09s
2025-06-01 22:55:15.622518 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 5.75s
2025-06-01 22:55:15.622524 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.21s
2025-06-01 22:55:15.622531 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.95s
2025-06-01 22:55:15.622537 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.92s
2025-06-01 22:55:15.622543 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.85s
2025-06-01 22:55:15.622549 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.61s
2025-06-01 22:55:15.622555 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.61s
2025-06-01 22:55:15.622561 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.47s
2025-06-01 22:55:15.622567 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.32s
2025-06-01 22:55:15.622573 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.26s
2025-06-01 22:55:15.622580 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.19s
2025-06-01 22:55:15.622586 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.16s
2025-06-01 22:55:15.622592 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.78s
2025-06-01 22:55:15.622598 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.71s
2025-06-01 22:55:15.622604 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.58s
2025-06-01 22:55:15.622611 | orchestrator | 2025-06-01 22:55:15 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:55:15.622617 | orchestrator | 2025-06-01 22:55:15 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:55:15.622626 | orchestrator | 2025-06-01 22:55:15 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:55:18.660651 | orchestrator | 2025-06-01 22:55:18 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:55:18.661029 | orchestrator | 2025-06-01 22:55:18 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:55:18.662160 | orchestrator | 2025-06-01 22:55:18 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:55:18.662212 | orchestrator | 2025-06-01 22:55:18 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:55:21.712177 | orchestrator | 2025-06-01 22:55:21 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:55:21.712867 | orchestrator | 2025-06-01 22:55:21 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:55:21.715246 | orchestrator | 2025-06-01 22:55:21 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:55:21.715638 | orchestrator | 2025-06-01 22:55:21 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:55:24.754198 | orchestrator | 2025-06-01 22:55:24 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:55:24.756850 | orchestrator | 2025-06-01 22:55:24 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:55:24.760445
| orchestrator | 2025-06-01 22:55:24 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:55:24.760829 | orchestrator | 2025-06-01 22:55:24 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:55:27.798595 | orchestrator | 2025-06-01 22:55:27 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:55:27.800444 | orchestrator | 2025-06-01 22:55:27 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:55:27.802402 | orchestrator | 2025-06-01 22:55:27 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:55:27.802425 | orchestrator | 2025-06-01 22:55:27 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:55:30.841173 | orchestrator | 2025-06-01 22:55:30 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:55:30.842554 | orchestrator | 2025-06-01 22:55:30 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:55:30.842712 | orchestrator | 2025-06-01 22:55:30 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:55:30.842732 | orchestrator | 2025-06-01 22:55:30 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:55:33.890907 | orchestrator | 2025-06-01 22:55:33 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:55:33.894080 | orchestrator | 2025-06-01 22:55:33 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:55:33.895023 | orchestrator | 2025-06-01 22:55:33 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:55:33.896491 | orchestrator | 2025-06-01 22:55:33 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:55:36.942074 | orchestrator | 2025-06-01 22:55:36 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:55:36.944426 | orchestrator | 2025-06-01 22:55:36 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:55:36.948175 | orchestrator | 2025-06-01 22:55:36 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:55:36.948224 | orchestrator | 2025-06-01 22:55:36 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:55:39.994227 | orchestrator | 2025-06-01 22:55:39 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:55:39.995294 | orchestrator | 2025-06-01 22:55:39 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:55:39.995768 | orchestrator | 2025-06-01 22:55:39 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:55:39.995844 | orchestrator | 2025-06-01 22:55:39 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:55:43.056930 | orchestrator | 2025-06-01 22:55:43 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:55:43.058840 | orchestrator | 2025-06-01 22:55:43 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:55:43.060805 | orchestrator | 2025-06-01 22:55:43 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:55:43.060884 | orchestrator | 2025-06-01 22:55:43 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:55:46.112499 | orchestrator | 2025-06-01 22:55:46 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:55:46.115227 | orchestrator | 2025-06-01 22:55:46 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:55:46.117004 | orchestrator | 2025-06-01 22:55:46 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:55:46.117395 | orchestrator | 2025-06-01 22:55:46 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:55:49.165178 | orchestrator | 2025-06-01 22:55:49 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:55:49.167973 | orchestrator | 2025-06-01 22:55:49 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:55:49.169819 | orchestrator | 2025-06-01 22:55:49 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:55:49.169858 | orchestrator | 2025-06-01 22:55:49 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:55:52.211245 | orchestrator | 2025-06-01 22:55:52 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:55:52.211356 | orchestrator | 2025-06-01 22:55:52 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:55:52.212610 | orchestrator | 2025-06-01 22:55:52 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:55:52.212770 | orchestrator | 2025-06-01 22:55:52 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:55:55.263837 | orchestrator | 2025-06-01 22:55:55 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:55:55.264384 | orchestrator | 2025-06-01 22:55:55 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:55:55.269798 | orchestrator | 2025-06-01 22:55:55 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:55:55.269831 | orchestrator | 2025-06-01 22:55:55 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:55:58.335889 | orchestrator | 2025-06-01 22:55:58 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:55:58.336876 | orchestrator | 2025-06-01 22:55:58 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:55:58.338388 | orchestrator | 2025-06-01 22:55:58 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:55:58.338539 | orchestrator | 2025-06-01 22:55:58 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:56:01.387519 | orchestrator | 2025-06-01 22:56:01 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:56:01.388646 | orchestrator | 2025-06-01 22:56:01 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:56:01.390366 | orchestrator | 2025-06-01 22:56:01 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:56:01.390395 | orchestrator | 2025-06-01 22:56:01 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:56:04.433267 | orchestrator | 2025-06-01 22:56:04 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:56:04.434962 | orchestrator | 2025-06-01 22:56:04 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:56:04.435712 | orchestrator | 2025-06-01 22:56:04 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:56:04.436114 | orchestrator | 2025-06-01 22:56:04 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:56:07.482288 | orchestrator | 2025-06-01 22:56:07 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:56:07.485242 | orchestrator | 2025-06-01 22:56:07 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:56:07.492618 | orchestrator | 2025-06-01 22:56:07 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:56:07.492651 | orchestrator | 2025-06-01 22:56:07 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:56:10.539033 | orchestrator | 2025-06-01 22:56:10 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:56:10.540037 | orchestrator | 2025-06-01 22:56:10 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:56:10.541690 | orchestrator | 2025-06-01 22:56:10 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:56:10.541825 | orchestrator | 2025-06-01 22:56:10 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:56:13.586965 | orchestrator | 2025-06-01 22:56:13 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:56:13.588623 | orchestrator | 2025-06-01 22:56:13 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:56:13.590221 | orchestrator | 2025-06-01 22:56:13 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:56:13.590252 | orchestrator | 2025-06-01 22:56:13 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:56:16.635998 | orchestrator | 2025-06-01 22:56:16 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:56:16.637501 | orchestrator | 2025-06-01 22:56:16 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:56:16.639228 | orchestrator | 2025-06-01 22:56:16 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:56:16.639256 | orchestrator | 2025-06-01 22:56:16 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:56:19.684283 | orchestrator | 2025-06-01 22:56:19 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:56:19.692104 | orchestrator | 2025-06-01 22:56:19 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:56:19.694484 | orchestrator | 2025-06-01 22:56:19 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:56:19.694763 | orchestrator | 2025-06-01 22:56:19 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:56:22.739189 | orchestrator | 2025-06-01 22:56:22 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:56:22.742332 | orchestrator | 2025-06-01 22:56:22 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:56:22.743928 | orchestrator | 2025-06-01 22:56:22 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:56:22.743957 | orchestrator | 2025-06-01 22:56:22 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:56:25.807973 | orchestrator | 2025-06-01 22:56:25 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:56:25.810461 | orchestrator | 2025-06-01 22:56:25 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:56:25.812541 | orchestrator | 2025-06-01 22:56:25 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:56:25.812567 | orchestrator | 2025-06-01 22:56:25 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:56:28.857587 | orchestrator | 2025-06-01 22:56:28 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:56:28.859627 | orchestrator | 2025-06-01 22:56:28 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:56:28.862232 | orchestrator | 2025-06-01 22:56:28 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:56:28.862266 | orchestrator | 2025-06-01 22:56:28 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:56:31.914463 | orchestrator | 2025-06-01 22:56:31 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:56:31.916077 | orchestrator | 2025-06-01 22:56:31 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:56:31.918409 | orchestrator | 2025-06-01 22:56:31 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:56:31.918454 | orchestrator | 2025-06-01 22:56:31 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:56:34.969276 | orchestrator | 2025-06-01 22:56:34 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:56:34.972169 | orchestrator | 2025-06-01 22:56:34 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:56:34.974362 | orchestrator | 2025-06-01 22:56:34 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:56:34.974745 | orchestrator | 2025-06-01 22:56:34 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:56:38.030943 | orchestrator | 2025-06-01 22:56:38 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:56:38.035223 | orchestrator | 2025-06-01 22:56:38 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:56:38.038549 | orchestrator | 2025-06-01 22:56:38 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:56:38.039229 | orchestrator | 2025-06-01 22:56:38 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:56:41.098261 | orchestrator | 2025-06-01 22:56:41 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:56:41.100154 | orchestrator | 2025-06-01 22:56:41 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:56:41.102995 | orchestrator | 2025-06-01 22:56:41 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:56:41.103112 | orchestrator | 2025-06-01 22:56:41 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:56:44.174793 | orchestrator | 2025-06-01 22:56:44 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:56:44.175888 | orchestrator | 2025-06-01 22:56:44 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:56:44.177762 | orchestrator | 2025-06-01 22:56:44 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:56:44.177795 | orchestrator | 2025-06-01 22:56:44 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:56:47.220636 | orchestrator | 2025-06-01 22:56:47 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:56:47.224294 | orchestrator | 2025-06-01 22:56:47 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:56:47.224341 | orchestrator | 2025-06-01 22:56:47 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:56:47.224348 | orchestrator | 2025-06-01 22:56:47 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:56:50.274999 | orchestrator | 2025-06-01 22:56:50 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:56:50.276528 | orchestrator | 2025-06-01 22:56:50 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:56:50.277770 | orchestrator | 2025-06-01 22:56:50 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:56:50.278232 | orchestrator | 2025-06-01 22:56:50 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:56:53.318369 | orchestrator | 2025-06-01 22:56:53 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:56:53.319378 | orchestrator | 2025-06-01 22:56:53 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:56:53.321521 | orchestrator | 2025-06-01 22:56:53 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:56:53.321560 | orchestrator | 2025-06-01 22:56:53 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:56:56.372789 | orchestrator | 2025-06-01 22:56:56 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:56:56.376041 | orchestrator | 2025-06-01 22:56:56 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state STARTED
2025-06-01 22:56:56.376693 | orchestrator | 2025-06-01 22:56:56 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:56:56.376774 | orchestrator | 2025-06-01 22:56:56 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:56:59.416869 | orchestrator | 2025-06-01 22:56:59 | INFO  | Task ca8f8cb2-49b9-44d4-8865-5056c4f30643 is in state
STARTED
2025-06-01 22:56:59.419212 | orchestrator | 2025-06-01 22:56:59 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:56:59.424767 | orchestrator | 2025-06-01 22:56:59 | INFO  | Task 47feef1f-f75c-4ea3-b029-a912a0427f19 is in state SUCCESS
2025-06-01 22:56:59.427144 | orchestrator |
2025-06-01 22:56:59.427217 | orchestrator |
2025-06-01 22:56:59.427231 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-06-01 22:56:59.427243 | orchestrator |
2025-06-01 22:56:59.427255 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-06-01 22:56:59.427266 | orchestrator | Sunday 01 June 2025 22:46:15 +0000 (0:00:00.775) 0:00:00.775 ***********
2025-06-01 22:56:59.427279 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:56:59.427291 | orchestrator |
2025-06-01 22:56:59.427303 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-06-01 22:56:59.427314 | orchestrator | Sunday 01 June 2025 22:46:16 +0000 (0:00:01.083) 0:00:01.859 ***********
2025-06-01 22:56:59.427325 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.427337 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.427348 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.427359 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.427370 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.427380 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.427391 | orchestrator |
2025-06-01 22:56:59.427402 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-06-01 22:56:59.427413 | orchestrator | Sunday 01 June 2025 22:46:18 +0000 (0:00:01.416) 0:00:03.276 ***********
2025-06-01 22:56:59.427424 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.427435 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.427471 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.427635 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.427651 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.427686 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.427757 | orchestrator |
2025-06-01 22:56:59.427771 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-06-01 22:56:59.427784 | orchestrator | Sunday 01 June 2025 22:46:19 +0000 (0:00:00.798) 0:00:04.074 ***********
2025-06-01 22:56:59.427797 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.427810 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.427823 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.427835 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.427848 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.427860 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.427872 | orchestrator |
2025-06-01 22:56:59.427886 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-06-01 22:56:59.427898 | orchestrator | Sunday 01 June 2025 22:46:20 +0000 (0:00:00.953) 0:00:05.028 ***********
2025-06-01 22:56:59.427910 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.427923 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.427935 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.427948 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.427960 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.427972 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.427984 | orchestrator |
2025-06-01 22:56:59.427997 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-06-01 22:56:59.428010 | orchestrator | Sunday 01 June 2025 22:46:20 +0000 (0:00:00.748) 0:00:05.776 ***********
2025-06-01 22:56:59.428022 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.428035 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.428047 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.428059 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.428071 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.428084 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.428096 | orchestrator |
2025-06-01 22:56:59.428108 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-06-01 22:56:59.428119 | orchestrator | Sunday 01 June 2025 22:46:21 +0000 (0:00:00.529) 0:00:06.306 ***********
2025-06-01 22:56:59.428130 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.428140 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.428151 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.428162 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.428173 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.428184 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.428195 | orchestrator |
2025-06-01 22:56:59.428236 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-06-01 22:56:59.428272 | orchestrator | Sunday 01 June 2025 22:46:22 +0000 (0:00:00.830) 0:00:07.137 ***********
2025-06-01 22:56:59.428283 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.428337 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.428348 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.428359 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.428370 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.428380 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.428391 | orchestrator |
2025-06-01 22:56:59.428402 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-06-01 22:56:59.428443 | orchestrator | Sunday 01 June 2025 22:46:23 +0000 (0:00:00.742) 0:00:07.879 ***********
2025-06-01 22:56:59.428455 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.428489 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.428500 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.428511 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.428602 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.428613 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.428624 | orchestrator |
2025-06-01 22:56:59.428689 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-06-01 22:56:59.428715 | orchestrator | Sunday 01 June 2025 22:46:23 +0000 (0:00:00.955) 0:00:08.834 ***********
2025-06-01 22:56:59.428727 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-01 22:56:59.428738 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-01 22:56:59.428749 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-01 22:56:59.428760 | orchestrator |
2025-06-01 22:56:59.428771 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-06-01 22:56:59.428782 | orchestrator | Sunday 01 June 2025 22:46:24 +0000 (0:00:00.908) 0:00:09.743 ***********
2025-06-01 22:56:59.428793 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.428804 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.428814 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.428825 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.428835 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.428846 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.428857 | orchestrator |
2025-06-01 22:56:59.428881 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-06-01 22:56:59.428893 | orchestrator | Sunday 01 June 2025 22:46:26 +0000
(0:00:01.223) 0:00:10.966 ***********
2025-06-01 22:56:59.428904 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-01 22:56:59.428915 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-01 22:56:59.428926 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-01 22:56:59.428936 | orchestrator |
2025-06-01 22:56:59.428947 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-06-01 22:56:59.428958 | orchestrator | Sunday 01 June 2025 22:46:29 +0000 (0:00:02.997) 0:00:13.963 ***********
2025-06-01 22:56:59.428969 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-01 22:56:59.428980 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-01 22:56:59.428990 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-01 22:56:59.429001 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.429012 | orchestrator |
2025-06-01 22:56:59.429023 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-06-01 22:56:59.429034 | orchestrator | Sunday 01 June 2025 22:46:29 +0000 (0:00:00.729) 0:00:14.693 ***********
2025-06-01 22:56:59.429053 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-01 22:56:59.429068 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-01 22:56:59.429079 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-01 22:56:59.429121 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.429134 | orchestrator |
2025-06-01 22:56:59.429145 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-06-01 22:56:59.429204 | orchestrator | Sunday 01 June 2025 22:46:30 +0000 (0:00:00.793) 0:00:15.486 ***********
2025-06-01 22:56:59.429219 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-01 22:56:59.429242 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-01 22:56:59.429253 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-01 22:56:59.429297 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.429309 | orchestrator |
2025-06-01 22:56:59.429320 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-06-01 22:56:59.429331 | orchestrator | Sunday 01 June 2025 22:46:31 +0000 (0:00:00.452) 0:00:15.939 ***********
2025-06-01 22:56:59.429345 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-01 22:46:26.929405', 'end': '2025-06-01 22:46:27.190264', 'delta': '0:00:00.260859', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-01 22:56:59.429368 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-01 22:46:27.895995', 'end': '2025-06-01 22:46:28.155674', 'delta': '0:00:00.259679', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-01 22:56:59.429386 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-01 22:46:28.630128', 'end': '2025-06-01 22:46:28.898330', 'delta': '0:00:00.268202', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-01 22:56:59.429398 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.429409 | orchestrator |
2025-06-01 22:56:59.429420 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-06-01 22:56:59.429431 | orchestrator | Sunday 01 June 2025 22:46:31 +0000 (0:00:00.369) 0:00:16.309 ***********
2025-06-01 22:56:59.429442 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.429453 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.429464 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.429475 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.429486 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.429504 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.429545 | orchestrator |
2025-06-01 22:56:59.429558 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-06-01 22:56:59.429570 | orchestrator | Sunday 01 June 2025 22:46:33 +0000 (0:00:01.667) 0:00:17.976 ***********
2025-06-01 22:56:59.429581 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.429592 | orchestrator |
2025-06-01 22:56:59.429739 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-06-01 22:56:59.429752 | orchestrator | Sunday 01 June 2025 22:46:33 +0000 (0:00:00.797) 0:00:18.773 ***********
2025-06-01 22:56:59.429763 | orchestrator | skipping:
[testbed-node-0]
2025-06-01 22:56:59.429774 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.429785 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.429796 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.429807 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.429846 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.429858 | orchestrator |
2025-06-01 22:56:59.429869 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-06-01 22:56:59.429922 | orchestrator | Sunday 01 June 2025 22:46:35 +0000 (0:00:01.485) 0:00:20.258 ***********
2025-06-01 22:56:59.429935 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.429946 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.429957 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.429968 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.429978 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.429989 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.430000 | orchestrator |
2025-06-01 22:56:59.430011 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-01 22:56:59.430071 | orchestrator | Sunday 01 June 2025 22:46:36 +0000 (0:00:01.128) 0:00:21.387 ***********
2025-06-01 22:56:59.430083 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.430094 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.430105 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.430116 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.430127 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.430138 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.430171 | orchestrator |
2025-06-01 22:56:59.430185 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-06-01 22:56:59.430196 | orchestrator | Sunday 01 June 2025 22:46:37 +0000 (0:00:01.200) 0:00:22.587 ***********
2025-06-01 22:56:59.430206 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.430217 | orchestrator |
2025-06-01 22:56:59.430228 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-06-01 22:56:59.430239 | orchestrator | Sunday 01 June 2025 22:46:37 +0000 (0:00:00.198) 0:00:22.786 ***********
2025-06-01 22:56:59.430262 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.430273 | orchestrator |
2025-06-01 22:56:59.430284 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-01 22:56:59.430318 | orchestrator | Sunday 01 June 2025 22:46:38 +0000 (0:00:00.212) 0:00:22.999 ***********
2025-06-01 22:56:59.430329 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.430340 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.430404 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.430416 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.430427 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.430438 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.430508 | orchestrator |
2025-06-01 22:56:59.430519 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-06-01 22:56:59.430539 | orchestrator | Sunday 01 June 2025 22:46:39 +0000 (0:00:00.885) 0:00:23.885 ***********
2025-06-01 22:56:59.430550 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.430628 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.430639 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.430659 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.430712 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.430723 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.430734 | orchestrator |
2025-06-01 22:56:59.430745 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-06-01 22:56:59.430756 | orchestrator | Sunday 01 June 2025 22:46:40 +0000 (0:00:01.129) 0:00:25.015 ***********
2025-06-01 22:56:59.430767 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.430778 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.430788 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.430799 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.430810 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.430820 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.430831 | orchestrator |
2025-06-01 22:56:59.430842 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-06-01 22:56:59.430853 | orchestrator | Sunday 01 June 2025 22:46:41 +0000 (0:00:01.010) 0:00:26.025 ***********
2025-06-01 22:56:59.430864 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.430890 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.430919 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.430931 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.430941 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.430959 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.430970 | orchestrator |
2025-06-01 22:56:59.430981 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-06-01 22:56:59.430992 | orchestrator | Sunday 01 June 2025 22:46:42 +0000 (0:00:01.193) 0:00:27.219 ***********
2025-06-01 22:56:59.431003 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.431013 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.431024 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.431035 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.431046 | orchestrator | skipping:
[testbed-node-4]
2025-06-01 22:56:59.431056 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.431067 | orchestrator |
2025-06-01 22:56:59.431078 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-06-01 22:56:59.431088 | orchestrator | Sunday 01 June 2025 22:46:43 +0000 (0:00:00.820) 0:00:28.039 ***********
2025-06-01 22:56:59.431099 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.431110 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.431121 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.431131 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.431142 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.431153 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.431164 | orchestrator |
2025-06-01 22:56:59.431175 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-06-01 22:56:59.431186 | orchestrator | Sunday 01 June 2025 22:46:44 +0000 (0:00:01.009) 0:00:29.049 ***********
2025-06-01 22:56:59.431196 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.431207 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.431218 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.431228 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.431239 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.431250 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.431261 | orchestrator |
2025-06-01 22:56:59.431271 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-06-01 22:56:59.431282 | orchestrator | Sunday 01 June 2025 22:46:45 +0000 (0:00:00.848) 0:00:29.898 ***********
2025-06-01 22:56:59.431294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 22:56:59.431314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 22:56:59.431325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 22:56:59.431350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 22:56:59.431370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 22:56:59.431381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 22:56:59.431398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 22:56:59.431409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 22:56:59.431425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a0f7ee6-8d36-4e2a-a158-b2d707ca7e6e', 'scsi-SQEMU_QEMU_HARDDISK_6a0f7ee6-8d36-4e2a-a158-b2d707ca7e6e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a0f7ee6-8d36-4e2a-a158-b2d707ca7e6e-part1', 'scsi-SQEMU_QEMU_HARDDISK_6a0f7ee6-8d36-4e2a-a158-b2d707ca7e6e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a0f7ee6-8d36-4e2a-a158-b2d707ca7e6e-part14', 'scsi-SQEMU_QEMU_HARDDISK_6a0f7ee6-8d36-4e2a-a158-b2d707ca7e6e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a0f7ee6-8d36-4e2a-a158-b2d707ca7e6e-part15', 'scsi-SQEMU_QEMU_HARDDISK_6a0f7ee6-8d36-4e2a-a158-b2d707ca7e6e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a0f7ee6-8d36-4e2a-a158-b2d707ca7e6e-part16', 'scsi-SQEMU_QEMU_HARDDISK_6a0f7ee6-8d36-4e2a-a158-b2d707ca7e6e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-01 22:56:59.431455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-07-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-01 22:56:59.431469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 22:56:59.431486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 22:56:59.431497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 22:56:59.431509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 22:56:59.431520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 22:56:59.431537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 22:56:59.431549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 22:56:59.431560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 22:56:59.431592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_790cd1d9-8172-4ab9-8ca1-308cdf1b1c1d', 'scsi-SQEMU_QEMU_HARDDISK_790cd1d9-8172-4ab9-8ca1-308cdf1b1c1d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_790cd1d9-8172-4ab9-8ca1-308cdf1b1c1d-part1', 'scsi-SQEMU_QEMU_HARDDISK_790cd1d9-8172-4ab9-8ca1-308cdf1b1c1d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_790cd1d9-8172-4ab9-8ca1-308cdf1b1c1d-part14', 'scsi-SQEMU_QEMU_HARDDISK_790cd1d9-8172-4ab9-8ca1-308cdf1b1c1d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_790cd1d9-8172-4ab9-8ca1-308cdf1b1c1d-part15', 'scsi-SQEMU_QEMU_HARDDISK_790cd1d9-8172-4ab9-8ca1-308cdf1b1c1d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_790cd1d9-8172-4ab9-8ca1-308cdf1b1c1d-part16', 'scsi-SQEMU_QEMU_HARDDISK_790cd1d9-8172-4ab9-8ca1-308cdf1b1c1d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-01 22:56:59.431606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-01 22:56:59.431625 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.431637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 22:56:59.431648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-01 22:56:59.431795 | orchestrator |
skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432204 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.432244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c48a14af-6166-400d-9965-9cbf579c714a', 'scsi-SQEMU_QEMU_HARDDISK_c48a14af-6166-400d-9965-9cbf579c714a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c48a14af-6166-400d-9965-9cbf579c714a-part1', 'scsi-SQEMU_QEMU_HARDDISK_c48a14af-6166-400d-9965-9cbf579c714a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c48a14af-6166-400d-9965-9cbf579c714a-part14', 'scsi-SQEMU_QEMU_HARDDISK_c48a14af-6166-400d-9965-9cbf579c714a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_c48a14af-6166-400d-9965-9cbf579c714a-part15', 'scsi-SQEMU_QEMU_HARDDISK_c48a14af-6166-400d-9965-9cbf579c714a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c48a14af-6166-400d-9965-9cbf579c714a-part16', 'scsi-SQEMU_QEMU_HARDDISK_c48a14af-6166-400d-9965-9cbf579c714a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:56:59.432332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--836f126b--3930--552c--8c28--37312a7074e3-osd--block--836f126b--3930--552c--8c28--37312a7074e3', 'dm-uuid-LVM-029Jp1Ec1ULGPT7VpQK8wuergGsAbmtCVfdLVCxb40tL0wN6DtrXRi9tfiPA9NoF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-07-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:56:59.432366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--04cd8323--667e--5571--83c4--b35d38a67016-osd--block--04cd8323--667e--5571--83c4--b35d38a67016', 'dm-uuid-LVM-XlZok0vJhac7G4DhhcTcFFzSL9VflUk62og1cc2KuwGLzOFTDHfpzhcEqMoT7nvt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432409 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.432421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65', 'scsi-SQEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part1', 'scsi-SQEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part14', 'scsi-SQEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part15', 'scsi-SQEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 
512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part16', 'scsi-SQEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:56:59.432527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--836f126b--3930--552c--8c28--37312a7074e3-osd--block--836f126b--3930--552c--8c28--37312a7074e3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Y2DTDu-OqzU-iwrS-q9VQ-sl0t-PCaj-8TQ9zT', 'scsi-0QEMU_QEMU_HARDDISK_e3d9d8cc-8358-4e9f-a548-9ae6b89fa066', 'scsi-SQEMU_QEMU_HARDDISK_e3d9d8cc-8358-4e9f-a548-9ae6b89fa066'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:56:59.432548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--656e26cc--5762--5518--9587--501a37b6e3ae-osd--block--656e26cc--5762--5518--9587--501a37b6e3ae', 'dm-uuid-LVM-OsQWKWmb2Eb93srMle6JZEP4p1SzdO066wdVT1A9olADd4xdWe6zSXcfyUaFrVfp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432561 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--04cd8323--667e--5571--83c4--b35d38a67016-osd--block--04cd8323--667e--5571--83c4--b35d38a67016'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KL2Xbh-0IGE-VrUs-08nz-BHDo-s58k-swmDrm', 'scsi-0QEMU_QEMU_HARDDISK_bed9961c-b7ee-4957-bf35-2fee53571a5a', 'scsi-SQEMU_QEMU_HARDDISK_bed9961c-b7ee-4957-bf35-2fee53571a5a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:56:59.432577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--154be1eb--c9a2--50db--b9e4--8c9f064a0b1c-osd--block--154be1eb--c9a2--50db--b9e4--8c9f064a0b1c', 'dm-uuid-LVM-WSo20NaMXnuIYmccxZZwFk2dZtNVqfMTwniI3oyA6ruR6ir5smlv2OXr4mCF7x5o'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d04dff8-74fe-4097-ace0-4c437e5e0f9f', 'scsi-SQEMU_QEMU_HARDDISK_6d04dff8-74fe-4097-ace0-4c437e5e0f9f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:56:59.432610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:56:59.432622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-06-01 22:56:59.432645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432703 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432717 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.432728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66', 'scsi-SQEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part1', 'scsi-SQEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part14', 'scsi-SQEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part15', 'scsi-SQEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part16', 'scsi-SQEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:56:59.432811 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--656e26cc--5762--5518--9587--501a37b6e3ae-osd--block--656e26cc--5762--5518--9587--501a37b6e3ae'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xfTFaW-LCOM-DyHP-0Emp-Ele7-RXEX-RflTVe', 'scsi-0QEMU_QEMU_HARDDISK_540779ba-6163-469a-a896-cda4c9a0c816', 'scsi-SQEMU_QEMU_HARDDISK_540779ba-6163-469a-a896-cda4c9a0c816'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:56:59.432830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--154be1eb--c9a2--50db--b9e4--8c9f064a0b1c-osd--block--154be1eb--c9a2--50db--b9e4--8c9f064a0b1c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tORD2o-QVrP-uu4G-yirs-jkTU-DRT3-VNJpfH', 'scsi-0QEMU_QEMU_HARDDISK_a15d8421-e56a-4621-aed8-2eaa8f026081', 'scsi-SQEMU_QEMU_HARDDISK_a15d8421-e56a-4621-aed8-2eaa8f026081'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:56:59.432849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--83360607--213f--5c54--ae9b--aa580894d048-osd--block--83360607--213f--5c54--ae9b--aa580894d048', 'dm-uuid-LVM-WVOBDT6woFNABVp5fCTIXaegcR0xFT0LuT0F0TMrOmiMe1YaCQo6tWAWVY8SkAxd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 
'virtual': 1}})  2025-06-01 22:56:59.432861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f76f8f5b-fbcd-4a13-87b7-7d8b29fb80c4', 'scsi-SQEMU_QEMU_HARDDISK_f76f8f5b-fbcd-4a13-87b7-7d8b29fb80c4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:56:59.432872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c033fef4--2688--55e0--9ca7--53dbc156bc4e-osd--block--c033fef4--2688--55e0--9ca7--53dbc156bc4e', 'dm-uuid-LVM-y1Up2sjeVNIrqC866rNW7BXkmCbzvmVfu13dJbP5yR1qydF2fcMgzKn8BOWCDB7t'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:56:59.432916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432927 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.432958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.432993 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.433004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.433016 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:56:59.433043 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b', 'scsi-SQEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part1', 'scsi-SQEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part14', 'scsi-SQEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part15', 'scsi-SQEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part16', 'scsi-SQEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:56:59.433064 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--83360607--213f--5c54--ae9b--aa580894d048-osd--block--83360607--213f--5c54--ae9b--aa580894d048'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vW2cIk-WvsG-wUkZ-mgQF-ppuo-9BmW-Squun7', 'scsi-0QEMU_QEMU_HARDDISK_8eb07f49-902f-451e-9ead-836ebd4b9d37', 'scsi-SQEMU_QEMU_HARDDISK_8eb07f49-902f-451e-9ead-836ebd4b9d37'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:56:59.433076 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c033fef4--2688--55e0--9ca7--53dbc156bc4e-osd--block--c033fef4--2688--55e0--9ca7--53dbc156bc4e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AdaWf5-eNop-BZtE-2lYp-DJXQ-9w6f-HpSC7v', 'scsi-0QEMU_QEMU_HARDDISK_44465191-0fa1-4c22-9234-5804ca50669c', 'scsi-SQEMU_QEMU_HARDDISK_44465191-0fa1-4c22-9234-5804ca50669c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:56:59.433088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a931087f-71b6-44f2-a559-c8deb4b3c146', 'scsi-SQEMU_QEMU_HARDDISK_a931087f-71b6-44f2-a559-c8deb4b3c146'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:56:59.433099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:56:59.433118 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.433129 | orchestrator | 2025-06-01 22:56:59.433142 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-01 22:56:59.433154 | orchestrator | Sunday 01 June 2025 22:46:46 +0000 (0:00:01.215) 0:00:31.113 *********** 2025-06-01 22:56:59.433166 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433189 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433201 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433213 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433224 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433236 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433255 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433278 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433291 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a0f7ee6-8d36-4e2a-a158-b2d707ca7e6e', 'scsi-SQEMU_QEMU_HARDDISK_6a0f7ee6-8d36-4e2a-a158-b2d707ca7e6e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a0f7ee6-8d36-4e2a-a158-b2d707ca7e6e-part1', 'scsi-SQEMU_QEMU_HARDDISK_6a0f7ee6-8d36-4e2a-a158-b2d707ca7e6e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a0f7ee6-8d36-4e2a-a158-b2d707ca7e6e-part14', 'scsi-SQEMU_QEMU_HARDDISK_6a0f7ee6-8d36-4e2a-a158-b2d707ca7e6e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a0f7ee6-8d36-4e2a-a158-b2d707ca7e6e-part15', 'scsi-SQEMU_QEMU_HARDDISK_6a0f7ee6-8d36-4e2a-a158-b2d707ca7e6e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a0f7ee6-8d36-4e2a-a158-b2d707ca7e6e-part16', 'scsi-SQEMU_QEMU_HARDDISK_6a0f7ee6-8d36-4e2a-a158-b2d707ca7e6e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-01 22:56:59.433310 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-07-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433323 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.433342 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433359 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433370 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433382 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433393 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433405 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433425 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433443 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433460 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_790cd1d9-8172-4ab9-8ca1-308cdf1b1c1d', 'scsi-SQEMU_QEMU_HARDDISK_790cd1d9-8172-4ab9-8ca1-308cdf1b1c1d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_790cd1d9-8172-4ab9-8ca1-308cdf1b1c1d-part1', 'scsi-SQEMU_QEMU_HARDDISK_790cd1d9-8172-4ab9-8ca1-308cdf1b1c1d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_790cd1d9-8172-4ab9-8ca1-308cdf1b1c1d-part14', 'scsi-SQEMU_QEMU_HARDDISK_790cd1d9-8172-4ab9-8ca1-308cdf1b1c1d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_790cd1d9-8172-4ab9-8ca1-308cdf1b1c1d-part15', 'scsi-SQEMU_QEMU_HARDDISK_790cd1d9-8172-4ab9-8ca1-308cdf1b1c1d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_790cd1d9-8172-4ab9-8ca1-308cdf1b1c1d-part16', 'scsi-SQEMU_QEMU_HARDDISK_790cd1d9-8172-4ab9-8ca1-308cdf1b1c1d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433473 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433490 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.433510 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433526 | orchestrator 
| skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433538 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433549 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433561 | orchestrator | skipping: [testbed-node-2] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433572 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433597 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433609 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433627 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c48a14af-6166-400d-9965-9cbf579c714a', 'scsi-SQEMU_QEMU_HARDDISK_c48a14af-6166-400d-9965-9cbf579c714a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c48a14af-6166-400d-9965-9cbf579c714a-part1', 'scsi-SQEMU_QEMU_HARDDISK_c48a14af-6166-400d-9965-9cbf579c714a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c48a14af-6166-400d-9965-9cbf579c714a-part14', 'scsi-SQEMU_QEMU_HARDDISK_c48a14af-6166-400d-9965-9cbf579c714a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c48a14af-6166-400d-9965-9cbf579c714a-part15', 
'scsi-SQEMU_QEMU_HARDDISK_c48a14af-6166-400d-9965-9cbf579c714a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c48a14af-6166-400d-9965-9cbf579c714a-part16', 'scsi-SQEMU_QEMU_HARDDISK_c48a14af-6166-400d-9965-9cbf579c714a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433640 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-07-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433685 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': 
{'ids': ['dm-name-ceph--836f126b--3930--552c--8c28--37312a7074e3-osd--block--836f126b--3930--552c--8c28--37312a7074e3', 'dm-uuid-LVM-029Jp1Ec1ULGPT7VpQK8wuergGsAbmtCVfdLVCxb40tL0wN6DtrXRi9tfiPA9NoF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433700 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--04cd8323--667e--5571--83c4--b35d38a67016-osd--block--04cd8323--667e--5571--83c4--b35d38a67016', 'dm-uuid-LVM-XlZok0vJhac7G4DhhcTcFFzSL9VflUk62og1cc2KuwGLzOFTDHfpzhcEqMoT7nvt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433712 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2025-06-01 22:56:59.433723 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433734 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433745 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.433757 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2025-06-01 22:56:59.433782 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433834 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433852 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433864 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.433876 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--656e26cc--5762--5518--9587--501a37b6e3ae-osd--block--656e26cc--5762--5518--9587--501a37b6e3ae', 'dm-uuid-LVM-OsQWKWmb2Eb93srMle6JZEP4p1SzdO066wdVT1A9olADd4xdWe6zSXcfyUaFrVfp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434244 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65', 'scsi-SQEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part1', 'scsi-SQEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part14', 'scsi-SQEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part15', 'scsi-SQEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part16', 'scsi-SQEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-01 22:56:59.434284 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--154be1eb--c9a2--50db--b9e4--8c9f064a0b1c-osd--block--154be1eb--c9a2--50db--b9e4--8c9f064a0b1c', 'dm-uuid-LVM-WSo20NaMXnuIYmccxZZwFk2dZtNVqfMTwniI3oyA6ruR6ir5smlv2OXr4mCF7x5o'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434298 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--836f126b--3930--552c--8c28--37312a7074e3-osd--block--836f126b--3930--552c--8c28--37312a7074e3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Y2DTDu-OqzU-iwrS-q9VQ-sl0t-PCaj-8TQ9zT', 'scsi-0QEMU_QEMU_HARDDISK_e3d9d8cc-8358-4e9f-a548-9ae6b89fa066', 'scsi-SQEMU_QEMU_HARDDISK_e3d9d8cc-8358-4e9f-a548-9ae6b89fa066'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434310 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434339 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--04cd8323--667e--5571--83c4--b35d38a67016-osd--block--04cd8323--667e--5571--83c4--b35d38a67016'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KL2Xbh-0IGE-VrUs-08nz-BHDo-s58k-swmDrm', 'scsi-0QEMU_QEMU_HARDDISK_bed9961c-b7ee-4957-bf35-2fee53571a5a', 'scsi-SQEMU_QEMU_HARDDISK_bed9961c-b7ee-4957-bf35-2fee53571a5a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434351 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434367 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d04dff8-74fe-4097-ace0-4c437e5e0f9f', 'scsi-SQEMU_QEMU_HARDDISK_6d04dff8-74fe-4097-ace0-4c437e5e0f9f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434379 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434391 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434409 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434421 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.434438 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434456 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--83360607--213f--5c54--ae9b--aa580894d048-osd--block--83360607--213f--5c54--ae9b--aa580894d048', 'dm-uuid-LVM-WVOBDT6woFNABVp5fCTIXaegcR0xFT0LuT0F0TMrOmiMe1YaCQo6tWAWVY8SkAxd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434468 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434479 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c033fef4--2688--55e0--9ca7--53dbc156bc4e-osd--block--c033fef4--2688--55e0--9ca7--53dbc156bc4e', 'dm-uuid-LVM-y1Up2sjeVNIrqC866rNW7BXkmCbzvmVfu13dJbP5yR1qydF2fcMgzKn8BOWCDB7t'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434491 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434511 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434529 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434541 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434559 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66', 'scsi-SQEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part1', 'scsi-SQEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part14', 'scsi-SQEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part15', 'scsi-SQEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part16', 
'scsi-SQEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434579 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434596 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--656e26cc--5762--5518--9587--501a37b6e3ae-osd--block--656e26cc--5762--5518--9587--501a37b6e3ae'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xfTFaW-LCOM-DyHP-0Emp-Ele7-RXEX-RflTVe', 'scsi-0QEMU_QEMU_HARDDISK_540779ba-6163-469a-a896-cda4c9a0c816', 'scsi-SQEMU_QEMU_HARDDISK_540779ba-6163-469a-a896-cda4c9a0c816'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434613 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434625 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--154be1eb--c9a2--50db--b9e4--8c9f064a0b1c-osd--block--154be1eb--c9a2--50db--b9e4--8c9f064a0b1c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tORD2o-QVrP-uu4G-yirs-jkTU-DRT3-VNJpfH', 'scsi-0QEMU_QEMU_HARDDISK_a15d8421-e56a-4621-aed8-2eaa8f026081', 'scsi-SQEMU_QEMU_HARDDISK_a15d8421-e56a-4621-aed8-2eaa8f026081'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434636 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 22:56:59.434655 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f76f8f5b-fbcd-4a13-87b7-7d8b29fb80c4', 'scsi-SQEMU_QEMU_HARDDISK_f76f8f5b-fbcd-4a13-87b7-7d8b29fb80c4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-01 22:56:59.434701 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-01 22:56:59.434714 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-01 22:56:59.434735 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-01 22:56:59.434747 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.434758 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-01 22:56:59.434778 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b', 'scsi-SQEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part1', 'scsi-SQEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part14', 'scsi-SQEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part15', 'scsi-SQEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part16', 'scsi-SQEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-01 22:56:59.434806 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--83360607--213f--5c54--ae9b--aa580894d048-osd--block--83360607--213f--5c54--ae9b--aa580894d048'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vW2cIk-WvsG-wUkZ-mgQF-ppuo-9BmW-Squun7', 'scsi-0QEMU_QEMU_HARDDISK_8eb07f49-902f-451e-9ead-836ebd4b9d37', 'scsi-SQEMU_QEMU_HARDDISK_8eb07f49-902f-451e-9ead-836ebd4b9d37'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-01 22:56:59.434855 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c033fef4--2688--55e0--9ca7--53dbc156bc4e-osd--block--c033fef4--2688--55e0--9ca7--53dbc156bc4e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AdaWf5-eNop-BZtE-2lYp-DJXQ-9w6f-HpSC7v', 'scsi-0QEMU_QEMU_HARDDISK_44465191-0fa1-4c22-9234-5804ca50669c', 'scsi-SQEMU_QEMU_HARDDISK_44465191-0fa1-4c22-9234-5804ca50669c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-01 22:56:59.434876 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a931087f-71b6-44f2-a559-c8deb4b3c146', 'scsi-SQEMU_QEMU_HARDDISK_a931087f-71b6-44f2-a559-c8deb4b3c146'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-01 22:56:59.434890 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-01 22:56:59.434903 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.434916 | orchestrator |
2025-06-01 22:56:59.434929 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-06-01 22:56:59.434942 | orchestrator | Sunday 01 June 2025 22:46:48 +0000 (0:00:02.137) 0:00:33.251 ***********
2025-06-01 22:56:59.434955 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.434968 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.434980 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.434999 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.435012 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.435024 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.435037 | orchestrator |
2025-06-01 22:56:59.435050 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-06-01 22:56:59.435062 | orchestrator | Sunday 01 June 2025 22:46:49 +0000 (0:00:01.229) 0:00:34.480 ***********
2025-06-01 22:56:59.435074 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.435086 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.435098 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.435111 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.435123 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.435136 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.435146 | orchestrator |
2025-06-01 22:56:59.435157 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-01 22:56:59.435168 | orchestrator | Sunday 01 June 2025 22:46:50 +0000 (0:00:00.622) 0:00:35.102 ***********
2025-06-01 22:56:59.435179 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.435190 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.435201 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.435212 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.435223 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.435234 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.435245 | orchestrator |
2025-06-01 22:56:59.435256 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-01 22:56:59.435267 | orchestrator | Sunday 01 June 2025 22:46:51 +0000 (0:00:00.945) 0:00:36.048 ***********
2025-06-01 22:56:59.435278 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.435294 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.435305 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.435323 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.435334 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.435345 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.435356 | orchestrator |
2025-06-01 22:56:59.435367 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-01 22:56:59.435378 | orchestrator | Sunday 01 June 2025 22:46:51 +0000 (0:00:00.895) 0:00:36.653 ***********
2025-06-01 22:56:59.435389 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.435400 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.435411 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.435421 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.435432 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.435443 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.435454 | orchestrator |
2025-06-01 22:56:59.435465 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-01 22:56:59.435476 | orchestrator | Sunday 01 June 2025 22:46:52 +0000 (0:00:00.605) 0:00:37.549 ***********
2025-06-01 22:56:59.435486 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.435497 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.435508 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.435518 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.435529 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.435540 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.435551 | orchestrator |
2025-06-01 22:56:59.435562 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-06-01 22:56:59.435572 | orchestrator | Sunday 01 June 2025 22:46:53 +0000 (0:00:00.861) 0:00:38.411 ***********
2025-06-01 22:56:59.435583 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-01 22:56:59.435595 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-06-01 22:56:59.435605 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-06-01 22:56:59.435616 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-06-01 22:56:59.435627 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-06-01 22:56:59.435638 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-06-01 22:56:59.435649 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-06-01 22:56:59.435704 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-06-01 22:56:59.435718 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-06-01 22:56:59.435729 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-06-01 22:56:59.435739 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-06-01 22:56:59.435750 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-06-01 22:56:59.435761 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-06-01 22:56:59.435772 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-06-01 22:56:59.435782 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-06-01 22:56:59.435793 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-06-01 22:56:59.435803 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-06-01 22:56:59.435814 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-06-01 22:56:59.435825 | orchestrator |
2025-06-01 22:56:59.435836 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-06-01 22:56:59.435846 | orchestrator | Sunday 01 June 2025 22:46:57 +0000 (0:00:03.937) 0:00:42.348 ***********
2025-06-01 22:56:59.435857 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-01 22:56:59.435868 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-01 22:56:59.435879 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-01 22:56:59.435889 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.435900 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-06-01 22:56:59.435911 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-06-01 22:56:59.435921 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-06-01 22:56:59.435939 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-06-01 22:56:59.435950 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-06-01 22:56:59.435961 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-06-01 22:56:59.435971 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.435982 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-01 22:56:59.435999 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-01 22:56:59.436010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-01 22:56:59.436021 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.436032 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-01 22:56:59.436043 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-01 22:56:59.436054 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-01 22:56:59.436065 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.436075 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.436086 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-01 22:56:59.436097 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-01 22:56:59.436107 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-01 22:56:59.436118 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.436128 | orchestrator |
2025-06-01 22:56:59.436139 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-06-01 22:56:59.436150 | orchestrator | Sunday 01 June 2025 22:46:58 +0000 (0:00:00.653) 0:00:43.002 ***********
2025-06-01 22:56:59.436161 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.436172 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.436182 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.436199 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:56:59.436211 | orchestrator |
2025-06-01 22:56:59.436222 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-01 22:56:59.436235 | orchestrator | Sunday 01 June 2025 22:46:59 +0000 (0:00:01.119) 0:00:44.122 ***********
2025-06-01 22:56:59.436246 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.436257 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.436267 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.436278 | orchestrator |
2025-06-01 22:56:59.436289 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-01 22:56:59.436300 | orchestrator | Sunday 01 June 2025 22:46:59 +0000 (0:00:00.282) 0:00:44.405 ***********
2025-06-01 22:56:59.436311 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.436322 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.436332 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.436343 | orchestrator |
2025-06-01 22:56:59.436354 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-01 22:56:59.436365 | orchestrator | Sunday 01 June 2025 22:46:59 +0000 (0:00:00.375) 0:00:44.780 ***********
2025-06-01 22:56:59.436375 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.436386 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.436397 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.436408 | orchestrator |
2025-06-01 22:56:59.436419 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-01 22:56:59.436430 | orchestrator | Sunday 01 June 2025 22:47:00 +0000 (0:00:00.252) 0:00:45.033 ***********
2025-06-01 22:56:59.436441 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.436452 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.436462 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.436473 | orchestrator |
2025-06-01 22:56:59.436484 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-01 22:56:59.436504 | orchestrator | Sunday 01 June 2025 22:47:00 +0000 (0:00:00.637) 0:00:45.670 ***********
2025-06-01 22:56:59.436514 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 22:56:59.436525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 22:56:59.436536 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 22:56:59.436547 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.436558 | orchestrator |
2025-06-01 22:56:59.436569 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-01 22:56:59.436580 | orchestrator | Sunday 01 June 2025 22:47:01 +0000 (0:00:00.503) 0:00:46.173 ***********
2025-06-01 22:56:59.436590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 22:56:59.436601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 22:56:59.436612 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 22:56:59.436622 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.436633 | orchestrator |
2025-06-01 22:56:59.436644 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-01 22:56:59.436655 | orchestrator | Sunday 01 June 2025 22:47:01 +0000 (0:00:00.528) 0:00:46.701 ***********
2025-06-01 22:56:59.436682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 22:56:59.436693 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 22:56:59.436704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 22:56:59.436715 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.436725 | orchestrator |
2025-06-01 22:56:59.436736 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-01 22:56:59.436747 | orchestrator | Sunday 01 June 2025 22:47:02 +0000 (0:00:00.693) 0:00:47.395 ***********
2025-06-01 22:56:59.436758 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.436769 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.436780 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.436790 | orchestrator |
2025-06-01 22:56:59.436801 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-01 22:56:59.436812 | orchestrator | Sunday 01 June 2025 22:47:03 +0000 (0:00:00.533) 0:00:47.928 ***********
2025-06-01 22:56:59.436823 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-01 22:56:59.436834 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-01 22:56:59.436845 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-01 22:56:59.436856 | orchestrator |
2025-06-01 22:56:59.436867 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-06-01 22:56:59.436878 | orchestrator | Sunday 01 June 2025 22:47:03 +0000 (0:00:00.553) 0:00:48.482 ***********
2025-06-01 22:56:59.436894 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-01 22:56:59.436906 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-01 22:56:59.436917 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-01 22:56:59.436928 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-06-01 22:56:59.436939 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-01 22:56:59.436950 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-01 22:56:59.436960 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-01 22:56:59.436971 | orchestrator |
2025-06-01 22:56:59.436982 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-06-01 22:56:59.436993 | orchestrator | Sunday 01 June 2025 22:47:04 +0000 (0:00:00.957) 0:00:49.439 ***********
2025-06-01 22:56:59.437004 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-01 22:56:59.437014 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-01 22:56:59.437025 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-01 22:56:59.437049 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-06-01 22:56:59.437061 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-01 22:56:59.437071 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-01 22:56:59.437082 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-01 22:56:59.437093 | orchestrator |
2025-06-01 22:56:59.437104 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-01 22:56:59.437114 | orchestrator | Sunday 01 June 2025 22:47:07 +0000 (0:00:02.497) 0:00:51.937 ***********
2025-06-01 22:56:59.437125 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:56:59.437138 | orchestrator |
2025-06-01 22:56:59.437149 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-01 22:56:59.437160 | orchestrator | Sunday 01 June 2025 22:47:08 +0000 (0:00:01.362) 0:00:53.300 ***********
2025-06-01 22:56:59.437171 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:56:59.437182 | orchestrator |
2025-06-01 22:56:59.437193 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-01 22:56:59.437203 | orchestrator | Sunday 01 June 2025 22:47:09 +0000 (0:00:01.372) 0:00:54.673 ***********
2025-06-01 22:56:59.437214 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.437225 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.437236 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.437246 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.437257 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.437268 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.437279 | orchestrator |
2025-06-01 22:56:59.437289 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-01 22:56:59.437300 | orchestrator | Sunday 01 June 2025 22:47:10 +0000 (0:00:00.962) 0:00:55.635 ***********
2025-06-01 22:56:59.437311 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.437322 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.437333 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.437343 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.437354 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.437365 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.437375 | orchestrator |
2025-06-01 22:56:59.437386 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-01 22:56:59.437397 | orchestrator | Sunday 01 June 2025 22:47:12 +0000 (0:00:01.665) 0:00:57.301 ***********
2025-06-01 22:56:59.437408 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.437419 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.437429 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.437440 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.437451 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.437461 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.437472 | orchestrator |
2025-06-01 22:56:59.437483 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-01 22:56:59.437494 | orchestrator | Sunday 01 June 2025 22:47:13 +0000 (0:00:01.140) 0:00:58.442 ***********
2025-06-01 22:56:59.437504 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.437515 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.437526 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.437536 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.437547 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.437558 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.437569 | orchestrator |
2025-06-01 22:56:59.437579 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-01 22:56:59.437598 | orchestrator | Sunday 01 June 2025 22:47:14 +0000 (0:00:01.099) 0:00:59.541 ***********
2025-06-01 22:56:59.437609 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.437620 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.437630 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.437641 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.437652 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.437679 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.437690 | orchestrator |
2025-06-01 22:56:59.437701 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-01 22:56:59.437712 | orchestrator | Sunday 01 June 2025 22:47:16 +0000 (0:00:01.382) 0:01:00.924 ***********
2025-06-01 22:56:59.437728 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.437739 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.437750 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.437761 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.437772 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.437782 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.437793 | orchestrator |
2025-06-01 22:56:59.437804 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-01 22:56:59.437815 | orchestrator | Sunday 01 June 2025 22:47:16 +0000 (0:00:00.579) 0:01:01.504 ***********
2025-06-01 22:56:59.437825 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.437836 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.437847 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.437857 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.437868 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.437878 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.437889 | orchestrator |
2025-06-01 22:56:59.437900 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-01 22:56:59.437911 | orchestrator | Sunday 01 June 2025 22:47:17 +0000 (0:00:00.963) 0:01:02.467 ***********
2025-06-01 22:56:59.437922 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.437932 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.437943 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.437954 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.437964 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.437975 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.437986 | orchestrator |
2025-06-01 22:56:59.437997 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-01 22:56:59.438013 | orchestrator | Sunday 01 June 2025 22:47:18 +0000 (0:00:01.212) 0:01:03.679 ***********
2025-06-01 22:56:59.438076 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.438088 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.438099 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.438111 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.438121 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.438132 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.438143 | orchestrator |
2025-06-01 22:56:59.438154 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-01 22:56:59.438165 | orchestrator | Sunday 01 June 2025 22:47:20 +0000 (0:00:01.209) 0:01:04.889 ***********
2025-06-01 22:56:59.438175 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.438186 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.438197 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.438208 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.438218 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.438229 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.438240 | orchestrator |
2025-06-01 22:56:59.438251 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-01 22:56:59.438262 | orchestrator | Sunday 01 June 2025 22:47:20 +0000 (0:00:00.651) 0:01:05.541 ***********
2025-06-01 22:56:59.438272 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.438283 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.438302 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.438313 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.438324 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.438335 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.438345 | orchestrator |
2025-06-01 22:56:59.438356 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-01 22:56:59.438367 | orchestrator | Sunday 01 June 2025 22:47:21 +0000 (0:00:00.862) 0:01:06.403 ***********
2025-06-01 22:56:59.438378 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.438388 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.438399 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.438410 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.438420 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.438431 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.438442 | orchestrator |
2025-06-01 22:56:59.438452 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-01 22:56:59.438463 | orchestrator | Sunday 01 June 2025 22:47:22 +0000 (0:00:00.648) 0:01:07.052 ***********
2025-06-01 22:56:59.438474 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.438484 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.438495 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.438506 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.438516 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.438527 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.438538 | orchestrator |
2025-06-01 22:56:59.438548 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-01 22:56:59.438559 | orchestrator | Sunday 01 June 2025 22:47:23 +0000 (0:00:00.907) 0:01:07.959 ***********
2025-06-01 22:56:59.438570 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.438580 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.438591 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.438602 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.438612 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.438623 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.438634 | orchestrator |
2025-06-01 22:56:59.438644 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-01 22:56:59.438655 | orchestrator | Sunday 01 June 2025 22:47:23 +0000 (0:00:00.755) 0:01:08.715 ***********
2025-06-01 22:56:59.438719 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.438731 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.438742 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.438753 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.438764 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.438775 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.438785 | orchestrator |
2025-06-01 22:56:59.438796 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-01 22:56:59.438808 | orchestrator | Sunday 01 June 2025 22:47:24 +0000 (0:00:01.142) 0:01:09.857 ***********
2025-06-01 22:56:59.438819 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.438829 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.438840 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.438851 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.438861 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.438872 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.438883 | orchestrator |
2025-06-01 22:56:59.438894 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-01 22:56:59.438922 | orchestrator | Sunday 01 June 2025 22:47:25 +0000 (0:00:00.591) 0:01:10.448 ***********
2025-06-01 22:56:59.438933 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.438944 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.438955 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.438966 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.438977 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.438987 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.439028 | orchestrator |
2025-06-01 22:56:59.439040 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-01 22:56:59.439051 | orchestrator | Sunday 01 June 2025 22:47:26 +0000 (0:00:00.795) 0:01:11.244 ***********
2025-06-01 22:56:59.439062 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.439073 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.439084 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.439094 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.439105 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.439116 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.439126 | orchestrator |
2025-06-01 22:56:59.439137 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-01 22:56:59.439148 | orchestrator | Sunday 01 June 2025 22:47:27 +0000 (0:00:00.636) 0:01:11.881 ***********
2025-06-01 22:56:59.439159 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.439170 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.439180 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.439191 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.439202 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.439212 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.439223 | orchestrator |
2025-06-01 22:56:59.439233 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-06-01 22:56:59.439252 | orchestrator | Sunday 01 June 2025 22:47:28 +0000 (0:00:01.230) 0:01:13.112 ***********
2025-06-01 22:56:59.439262 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:56:59.439272 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:56:59.439281 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:56:59.439291 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:56:59.439300 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:56:59.439310 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:56:59.439319 | orchestrator |
2025-06-01 22:56:59.439329 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-06-01 22:56:59.439339 | orchestrator | Sunday 01 June 2025 22:47:29 +0000 (0:00:01.724) 0:01:14.836
*********** 2025-06-01 22:56:59.439348 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.439358 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.439367 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.439377 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:56:59.439386 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:56:59.439396 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:56:59.439405 | orchestrator | 2025-06-01 22:56:59.439415 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-06-01 22:56:59.439424 | orchestrator | Sunday 01 June 2025 22:47:31 +0000 (0:00:01.874) 0:01:16.711 *********** 2025-06-01 22:56:59.439434 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:56:59.439444 | orchestrator | 2025-06-01 22:56:59.439454 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-06-01 22:56:59.439463 | orchestrator | Sunday 01 June 2025 22:47:33 +0000 (0:00:01.157) 0:01:17.869 *********** 2025-06-01 22:56:59.439473 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.439482 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.439492 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.439501 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.439511 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.439520 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.439530 | orchestrator | 2025-06-01 22:56:59.439539 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-06-01 22:56:59.439549 | orchestrator | Sunday 01 June 2025 22:47:33 +0000 (0:00:00.776) 0:01:18.645 *********** 2025-06-01 22:56:59.439559 | orchestrator | skipping: 
[testbed-node-0] 2025-06-01 22:56:59.439568 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.439577 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.439594 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.439603 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.439613 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.439622 | orchestrator | 2025-06-01 22:56:59.439632 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-06-01 22:56:59.439641 | orchestrator | Sunday 01 June 2025 22:47:34 +0000 (0:00:00.541) 0:01:19.187 *********** 2025-06-01 22:56:59.439651 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-01 22:56:59.439681 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-01 22:56:59.439691 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-01 22:56:59.439701 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-01 22:56:59.439710 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-01 22:56:59.439720 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-01 22:56:59.439729 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-01 22:56:59.439739 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-01 22:56:59.439748 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-01 22:56:59.439758 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-01 22:56:59.439767 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-01 
22:56:59.439777 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-01 22:56:59.439786 | orchestrator | 2025-06-01 22:56:59.439802 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-06-01 22:56:59.439812 | orchestrator | Sunday 01 June 2025 22:47:35 +0000 (0:00:01.537) 0:01:20.724 *********** 2025-06-01 22:56:59.439821 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.439831 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.439840 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.439850 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:56:59.439859 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:56:59.439869 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:56:59.439878 | orchestrator | 2025-06-01 22:56:59.439888 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-06-01 22:56:59.439898 | orchestrator | Sunday 01 June 2025 22:47:36 +0000 (0:00:00.856) 0:01:21.581 *********** 2025-06-01 22:56:59.439907 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.439916 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.439926 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.439935 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.439945 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.439954 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.439964 | orchestrator | 2025-06-01 22:56:59.439973 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-06-01 22:56:59.439983 | orchestrator | Sunday 01 June 2025 22:47:37 +0000 (0:00:00.825) 0:01:22.406 *********** 2025-06-01 22:56:59.439992 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.440002 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.440011 | 
orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.440025 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.440035 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.440045 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.440054 | orchestrator | 2025-06-01 22:56:59.440064 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-06-01 22:56:59.440073 | orchestrator | Sunday 01 June 2025 22:47:38 +0000 (0:00:00.578) 0:01:22.985 *********** 2025-06-01 22:56:59.440089 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.440099 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.440108 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.440118 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.440127 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.440137 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.440146 | orchestrator | 2025-06-01 22:56:59.440155 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-06-01 22:56:59.440165 | orchestrator | Sunday 01 June 2025 22:47:38 +0000 (0:00:00.791) 0:01:23.777 *********** 2025-06-01 22:56:59.440175 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:56:59.440185 | orchestrator | 2025-06-01 22:56:59.440194 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-06-01 22:56:59.440204 | orchestrator | Sunday 01 June 2025 22:47:40 +0000 (0:00:01.174) 0:01:24.951 *********** 2025-06-01 22:56:59.440213 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:56:59.440223 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:56:59.440232 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.440242 | orchestrator | ok: 
[testbed-node-0] 2025-06-01 22:56:59.440251 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.440260 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.440270 | orchestrator | 2025-06-01 22:56:59.440279 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-06-01 22:56:59.440289 | orchestrator | Sunday 01 June 2025 22:48:39 +0000 (0:00:58.927) 0:02:23.879 *********** 2025-06-01 22:56:59.440299 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-01 22:56:59.440308 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-01 22:56:59.440318 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-01 22:56:59.440327 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.440337 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-01 22:56:59.440346 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-01 22:56:59.440356 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-01 22:56:59.440365 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.440374 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-01 22:56:59.440384 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-01 22:56:59.440394 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-01 22:56:59.440403 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.440413 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-01 22:56:59.440422 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-01 22:56:59.440431 | orchestrator | skipping: 
[testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-01 22:56:59.440441 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.440450 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-01 22:56:59.440460 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-01 22:56:59.440469 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-01 22:56:59.440479 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.440488 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-01 22:56:59.440498 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-01 22:56:59.440507 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-01 22:56:59.440522 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.440544 | orchestrator | 2025-06-01 22:56:59.440562 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-06-01 22:56:59.440577 | orchestrator | Sunday 01 June 2025 22:48:39 +0000 (0:00:00.976) 0:02:24.855 *********** 2025-06-01 22:56:59.440594 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.440611 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.440629 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.440640 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.440649 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.440658 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.440686 | orchestrator | 2025-06-01 22:56:59.440695 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-06-01 22:56:59.440705 | orchestrator | Sunday 01 June 2025 22:48:40 +0000 (0:00:00.665) 0:02:25.521 *********** 2025-06-01 22:56:59.440715 | 
orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.440724 | orchestrator | 2025-06-01 22:56:59.440734 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-06-01 22:56:59.440743 | orchestrator | Sunday 01 June 2025 22:48:40 +0000 (0:00:00.186) 0:02:25.707 *********** 2025-06-01 22:56:59.440752 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.440762 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.440771 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.440781 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.440790 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.440799 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.440809 | orchestrator | 2025-06-01 22:56:59.440824 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-06-01 22:56:59.440834 | orchestrator | Sunday 01 June 2025 22:48:41 +0000 (0:00:01.015) 0:02:26.723 *********** 2025-06-01 22:56:59.440844 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.440853 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.440862 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.440872 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.440881 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.440891 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.440900 | orchestrator | 2025-06-01 22:56:59.440909 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-06-01 22:56:59.440919 | orchestrator | Sunday 01 June 2025 22:48:42 +0000 (0:00:00.698) 0:02:27.421 *********** 2025-06-01 22:56:59.440928 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.440938 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.440947 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.440956 | 
orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.440966 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.440975 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.440984 | orchestrator | 2025-06-01 22:56:59.440994 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-06-01 22:56:59.441003 | orchestrator | Sunday 01 June 2025 22:48:43 +0000 (0:00:00.769) 0:02:28.191 *********** 2025-06-01 22:56:59.441013 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:56:59.441022 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.441032 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.441041 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.441051 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:56:59.441060 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.441069 | orchestrator | 2025-06-01 22:56:59.441079 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-06-01 22:56:59.441088 | orchestrator | Sunday 01 June 2025 22:48:45 +0000 (0:00:02.399) 0:02:30.590 *********** 2025-06-01 22:56:59.441098 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.441107 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.441116 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.441126 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:56:59.441135 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:56:59.441156 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.441166 | orchestrator | 2025-06-01 22:56:59.441175 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-06-01 22:56:59.441185 | orchestrator | Sunday 01 June 2025 22:48:46 +0000 (0:00:00.768) 0:02:31.359 *********** 2025-06-01 22:56:59.441195 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:56:59.441205 | orchestrator | 2025-06-01 22:56:59.441214 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-06-01 22:56:59.441224 | orchestrator | Sunday 01 June 2025 22:48:47 +0000 (0:00:01.353) 0:02:32.713 *********** 2025-06-01 22:56:59.441233 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.441243 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.441252 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.441261 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.441271 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.441280 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.441289 | orchestrator | 2025-06-01 22:56:59.441299 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-06-01 22:56:59.441308 | orchestrator | Sunday 01 June 2025 22:48:48 +0000 (0:00:00.934) 0:02:33.647 *********** 2025-06-01 22:56:59.441318 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.441327 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.441336 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.441346 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.441355 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.441365 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.441374 | orchestrator | 2025-06-01 22:56:59.441383 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-06-01 22:56:59.441393 | orchestrator | Sunday 01 June 2025 22:48:49 +0000 (0:00:01.044) 0:02:34.692 *********** 2025-06-01 22:56:59.441402 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.441412 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.441421 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.441430 | 
orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.441440 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.441449 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.441459 | orchestrator | 2025-06-01 22:56:59.441468 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-06-01 22:56:59.441484 | orchestrator | Sunday 01 June 2025 22:48:50 +0000 (0:00:00.714) 0:02:35.406 *********** 2025-06-01 22:56:59.441494 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.441503 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.441513 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.441522 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.441532 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.441541 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.441551 | orchestrator | 2025-06-01 22:56:59.441560 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-06-01 22:56:59.441570 | orchestrator | Sunday 01 June 2025 22:48:51 +0000 (0:00:00.867) 0:02:36.274 *********** 2025-06-01 22:56:59.441579 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.441588 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.441598 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.441607 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.441617 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.441626 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.441635 | orchestrator | 2025-06-01 22:56:59.441645 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-06-01 22:56:59.441654 | orchestrator | Sunday 01 June 2025 22:48:52 +0000 (0:00:00.722) 0:02:36.997 *********** 2025-06-01 22:56:59.441704 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.441722 | 
orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.441731 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.441740 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.441755 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.441765 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.441774 | orchestrator | 2025-06-01 22:56:59.441784 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-06-01 22:56:59.441793 | orchestrator | Sunday 01 June 2025 22:48:52 +0000 (0:00:00.846) 0:02:37.843 *********** 2025-06-01 22:56:59.441803 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.441812 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.441822 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.441831 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.441840 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.441850 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.441859 | orchestrator | 2025-06-01 22:56:59.441869 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-06-01 22:56:59.441878 | orchestrator | Sunday 01 June 2025 22:48:53 +0000 (0:00:00.494) 0:02:38.338 *********** 2025-06-01 22:56:59.441888 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.441897 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.441907 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.441916 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.441925 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.441935 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.441944 | orchestrator | 2025-06-01 22:56:59.441954 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-06-01 22:56:59.441963 | orchestrator | Sunday 01 June 2025 22:48:54 +0000 
(0:00:01.060) 0:02:39.398 *********** 2025-06-01 22:56:59.441973 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.441982 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.441992 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.442001 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:56:59.442011 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:56:59.442049 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.442059 | orchestrator | 2025-06-01 22:56:59.442069 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-06-01 22:56:59.442079 | orchestrator | Sunday 01 June 2025 22:48:55 +0000 (0:00:01.408) 0:02:40.807 *********** 2025-06-01 22:56:59.442088 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:56:59.442098 | orchestrator | 2025-06-01 22:56:59.442108 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-06-01 22:56:59.442117 | orchestrator | Sunday 01 June 2025 22:48:57 +0000 (0:00:01.277) 0:02:42.084 *********** 2025-06-01 22:56:59.442127 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-06-01 22:56:59.442136 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-06-01 22:56:59.442146 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-06-01 22:56:59.442156 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-06-01 22:56:59.442165 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-06-01 22:56:59.442175 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-06-01 22:56:59.442185 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-06-01 22:56:59.442194 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-06-01 22:56:59.442204 | orchestrator | 
changed: [testbed-node-5] => (item=/etc/ceph) 2025-06-01 22:56:59.442213 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-06-01 22:56:59.442221 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-06-01 22:56:59.442228 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-06-01 22:56:59.442236 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-06-01 22:56:59.442250 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-06-01 22:56:59.442258 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-06-01 22:56:59.442266 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-06-01 22:56:59.442273 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-06-01 22:56:59.442281 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-06-01 22:56:59.442289 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-06-01 22:56:59.442297 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-06-01 22:56:59.442305 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-06-01 22:56:59.442325 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-06-01 22:56:59.442333 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-06-01 22:56:59.442341 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-06-01 22:56:59.442349 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-06-01 22:56:59.442356 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-06-01 22:56:59.442364 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-06-01 22:56:59.442372 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-06-01 22:56:59.442380 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 
2025-06-01 22:56:59.442387 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-06-01 22:56:59.442395 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-06-01 22:56:59.442403 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-06-01 22:56:59.442410 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-06-01 22:56:59.442418 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-06-01 22:56:59.442426 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-06-01 22:56:59.442434 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-06-01 22:56:59.442441 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-06-01 22:56:59.442454 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-06-01 22:56:59.442461 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-06-01 22:56:59.442469 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-06-01 22:56:59.442477 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-06-01 22:56:59.442485 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-06-01 22:56:59.442493 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-06-01 22:56:59.442500 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-06-01 22:56:59.442508 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-06-01 22:56:59.442516 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-06-01 22:56:59.442523 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-06-01 22:56:59.442531 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-01 22:56:59.442539 | orchestrator | changed: [testbed-node-3] => 
(item=/var/lib/ceph/bootstrap-rgw)
2025-06-01 22:56:59.442546 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-01 22:56:59.442554 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-06-01 22:56:59.442561 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-01 22:56:59.442569 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-01 22:56:59.442577 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-01 22:56:59.442584 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-01 22:56:59.442597 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-01 22:56:59.442605 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-01 22:56:59.442612 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-01 22:56:59.442620 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-01 22:56:59.442628 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-01 22:56:59.442636 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-01 22:56:59.442643 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-01 22:56:59.442651 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-01 22:56:59.442659 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-01 22:56:59.442680 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-01 22:56:59.442687 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-01 22:56:59.442695 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-01 22:56:59.442703 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-01 22:56:59.442710 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-01 22:56:59.442718 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-01 22:56:59.442726 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-01 22:56:59.442734 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-01 22:56:59.442741 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-01 22:56:59.442749 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-01 22:56:59.442757 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-01 22:56:59.442764 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-01 22:56:59.442772 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-01 22:56:59.442780 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-01 22:56:59.442788 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-01 22:56:59.442801 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-01 22:56:59.442809 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-01 22:56:59.442817 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-01 22:56:59.442825 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-06-01 22:56:59.442833 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-06-01 22:56:59.442841 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-01 22:56:59.442848 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-06-01 22:56:59.442856 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-01 22:56:59.442864 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-06-01 22:56:59.442872 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-06-01 22:56:59.442879 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-06-01 22:56:59.442887 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-06-01 22:56:59.442895 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-06-01 22:56:59.442903 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-06-01 22:56:59.442910 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-06-01 22:56:59.442918 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-06-01 22:56:59.442930 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-06-01 22:56:59.442943 | orchestrator |
2025-06-01 22:56:59.442951 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-06-01 22:56:59.442959 | orchestrator | Sunday 01 June 2025 22:49:03 +0000 (0:00:06.358) 0:02:48.443 ***********
2025-06-01 22:56:59.442967 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.442974 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.442982 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.442990 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:56:59.442998 | orchestrator |
2025-06-01 22:56:59.443006 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-06-01 22:56:59.443014 | orchestrator | Sunday 01 June 2025 22:49:04 +0000 (0:00:01.063) 0:02:49.506 ***********
2025-06-01 22:56:59.443022 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-01 22:56:59.443030 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-01 22:56:59.443038 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-01 22:56:59.443045 | orchestrator |
2025-06-01 22:56:59.443053 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-06-01 22:56:59.443061 | orchestrator | Sunday 01 June 2025 22:49:05 +0000 (0:00:00.818) 0:02:50.325 ***********
2025-06-01 22:56:59.443069 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-01 22:56:59.443077 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-01 22:56:59.443085 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-01 22:56:59.443092 | orchestrator |
2025-06-01 22:56:59.443100 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-06-01 22:56:59.443108 | orchestrator | Sunday 01 June 2025 22:49:07 +0000 (0:00:01.694) 0:02:52.020 ***********
2025-06-01 22:56:59.443116 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.443123 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.443131 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.443139 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.443147 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.443155 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.443163 | orchestrator |
2025-06-01 22:56:59.443171 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-06-01 22:56:59.443178 | orchestrator | Sunday 01 June 2025 22:49:07 +0000 (0:00:00.601) 0:02:52.622 ***********
2025-06-01 22:56:59.443186 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.443194 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.443201 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.443209 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.443217 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.443225 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.443232 | orchestrator |
2025-06-01 22:56:59.443240 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-06-01 22:56:59.443248 | orchestrator | Sunday 01 June 2025 22:49:08 +0000 (0:00:00.803) 0:02:53.426 ***********
2025-06-01 22:56:59.443255 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.443263 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.443271 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.443279 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.443286 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.443294 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.443308 | orchestrator |
2025-06-01 22:56:59.443316 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-06-01 22:56:59.443324 | orchestrator | Sunday 01 June 2025 22:49:09 +0000 (0:00:00.824) 0:02:54.250 ***********
2025-06-01 22:56:59.443332 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.443340 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.443353 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.443361 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.443369 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.443376 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.443384 | orchestrator |
2025-06-01 22:56:59.443392 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-06-01 22:56:59.443400 | orchestrator | Sunday 01 June 2025 22:49:10 +0000 (0:00:00.924) 0:02:55.175 ***********
2025-06-01 22:56:59.443408 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.443416 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.443423 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.443431 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.443438 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.443446 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.443454 | orchestrator |
2025-06-01 22:56:59.443462 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-06-01 22:56:59.443470 | orchestrator | Sunday 01 June 2025 22:49:11 +0000 (0:00:01.047) 0:02:55.872 ***********
2025-06-01 22:56:59.443478 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.443485 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.443493 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.443501 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.443508 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.443516 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.443524 | orchestrator |
2025-06-01 22:56:59.443531 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-06-01 22:56:59.443547 | orchestrator | Sunday 01 June 2025 22:49:12 +0000 (0:00:01.047) 0:02:56.919 ***********
2025-06-01 22:56:59.443555 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.443563 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.443570 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.443578 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.443586 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.443594 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.443601 | orchestrator |
2025-06-01 22:56:59.443609 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-06-01 22:56:59.443617 | orchestrator | Sunday 01 June 2025 22:49:12 +0000 (0:00:00.712) 0:02:57.632 ***********
2025-06-01 22:56:59.443625 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.443633 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.443640 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.443648 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.443656 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.443678 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.443686 | orchestrator |
2025-06-01 22:56:59.443694 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-06-01 22:56:59.443702 | orchestrator | Sunday 01 June 2025 22:49:13 +0000 (0:00:00.766) 0:02:58.399 ***********
2025-06-01 22:56:59.443710 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.443717 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.443725 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.443733 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.443741 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.443748 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.443756 | orchestrator |
2025-06-01 22:56:59.443764 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-06-01 22:56:59.443778 | orchestrator | Sunday 01 June 2025 22:49:16 +0000 (0:00:03.279) 0:03:01.679 ***********
2025-06-01 22:56:59.443786 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.443793 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.443801 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.443809 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.443816 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.443824 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.443832 | orchestrator |
2025-06-01 22:56:59.443840 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-06-01 22:56:59.443847 | orchestrator | Sunday 01 June 2025 22:49:17 +0000 (0:00:00.714) 0:03:02.393 ***********
2025-06-01 22:56:59.443855 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.443863 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.443870 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.443878 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.443886 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.443894 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.443901 | orchestrator |
2025-06-01 22:56:59.443909 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-06-01 22:56:59.443917 | orchestrator | Sunday 01 June 2025 22:49:18 +0000 (0:00:00.661) 0:03:03.054 ***********
2025-06-01 22:56:59.443925 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.443932 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.443940 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.443948 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.443956 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.443963 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.443971 | orchestrator |
2025-06-01 22:56:59.443979 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-06-01 22:56:59.443986 | orchestrator | Sunday 01 June 2025 22:49:19 +0000 (0:00:00.865) 0:03:03.920 ***********
2025-06-01 22:56:59.443994 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.444002 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.444010 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.444018 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-01 22:56:59.444025 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-01 22:56:59.444034 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-01 22:56:59.444041 | orchestrator |
2025-06-01 22:56:59.444049 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-06-01 22:56:59.444062 | orchestrator | Sunday 01 June 2025 22:49:19 +0000 (0:00:00.647) 0:03:04.567 ***********
2025-06-01 22:56:59.444070 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.444078 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.444085 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.444095 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-06-01 22:56:59.444103 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-06-01 22:56:59.444117 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-06-01 22:56:59.444131 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-06-01 22:56:59.444139 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.444147 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.444155 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-06-01 22:56:59.444163 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-06-01 22:56:59.444171 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.444179 | orchestrator |
2025-06-01 22:56:59.444187 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-06-01 22:56:59.444195 | orchestrator | Sunday 01 June 2025 22:49:20 +0000 (0:00:01.080) 0:03:05.648 ***********
2025-06-01 22:56:59.444202 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.444210 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.444218 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.444226 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.444233 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.444241 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.444249 | orchestrator |
2025-06-01 22:56:59.444256 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-06-01 22:56:59.444264 | orchestrator | Sunday 01 June 2025 22:49:21 +0000 (0:00:00.740) 0:03:06.388 ***********
2025-06-01 22:56:59.444272 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.444280 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.444287 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.444295 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.444303 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.444310 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.444318 | orchestrator |
2025-06-01 22:56:59.444326 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-01 22:56:59.444334 | orchestrator | Sunday 01 June 2025 22:49:22 +0000 (0:00:00.823) 0:03:07.319 ***********
2025-06-01 22:56:59.444342 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.444349 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.444357 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.444365 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.444372 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.444380 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.444388 | orchestrator |
2025-06-01 22:56:59.444396 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-01 22:56:59.444404 | orchestrator | Sunday 01 June 2025 22:49:23 +0000 (0:00:00.823) 0:03:08.143 ***********
2025-06-01 22:56:59.444411 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.444419 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.444427 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.444434 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.444442 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.444449 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.444457 | orchestrator |
2025-06-01 22:56:59.444470 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-01 22:56:59.444478 | orchestrator | Sunday 01 June 2025 22:49:24 +0000 (0:00:01.021) 0:03:09.164 ***********
2025-06-01 22:56:59.444485 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.444493 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.444501 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.444513 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.444521 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.444529 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.444537 | orchestrator |
2025-06-01 22:56:59.444545 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-01 22:56:59.444553 | orchestrator | Sunday 01 June 2025 22:49:25 +0000 (0:00:00.858) 0:03:10.023 ***********
2025-06-01 22:56:59.444560 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.444568 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.444576 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.444584 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.444591 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.444599 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.444607 | orchestrator |
2025-06-01 22:56:59.444615 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-01 22:56:59.444623 | orchestrator | Sunday 01 June 2025 22:49:26 +0000 (0:00:00.925) 0:03:10.948 ***********
2025-06-01 22:56:59.444630 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-01 22:56:59.444638 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-01 22:56:59.444646 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-01 22:56:59.444654 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.444694 | orchestrator |
2025-06-01 22:56:59.444704 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-01 22:56:59.444712 | orchestrator | Sunday 01 June 2025 22:49:26 +0000 (0:00:00.331) 0:03:11.280 ***********
2025-06-01 22:56:59.444724 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-01 22:56:59.444732 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-01 22:56:59.444740 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-01 22:56:59.444748 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.444756 | orchestrator |
2025-06-01 22:56:59.444764 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-01 22:56:59.444772 | orchestrator | Sunday 01 June 2025 22:49:26 +0000 (0:00:00.384) 0:03:11.665 ***********
2025-06-01 22:56:59.444780 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-01 22:56:59.444788 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-01 22:56:59.444796 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-01 22:56:59.444803 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.444811 | orchestrator |
2025-06-01 22:56:59.444819 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-01 22:56:59.444827 | orchestrator | Sunday 01 June 2025 22:49:27 +0000 (0:00:00.301) 0:03:11.966 ***********
2025-06-01 22:56:59.444835 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.444842 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.444850 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.444858 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.444866 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.444874 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.444882 | orchestrator |
2025-06-01 22:56:59.444890 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-01 22:56:59.444897 | orchestrator | Sunday 01 June 2025 22:49:27 +0000 (0:00:00.585) 0:03:12.552 ***********
2025-06-01 22:56:59.444905 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-06-01 22:56:59.444913 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.444921 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-06-01 22:56:59.444934 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.444942 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-06-01 22:56:59.444950 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.444957 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-01 22:56:59.444965 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-01 22:56:59.444973 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-01 22:56:59.444980 | orchestrator |
2025-06-01 22:56:59.444988 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-06-01 22:56:59.444996 | orchestrator | Sunday 01 June 2025 22:49:29 +0000 (0:00:01.688) 0:03:14.241 ***********
2025-06-01 22:56:59.445004 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:56:59.445011 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:56:59.445019 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:56:59.445026 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:56:59.445034 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:56:59.445042 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:56:59.445049 | orchestrator |
2025-06-01 22:56:59.445057 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-01 22:56:59.445065 | orchestrator | Sunday 01 June 2025 22:49:32 +0000 (0:00:03.015) 0:03:17.257 ***********
2025-06-01 22:56:59.445073 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:56:59.445080 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:56:59.445088 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:56:59.445096 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:56:59.445103 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:56:59.445111 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:56:59.445119 | orchestrator |
2025-06-01 22:56:59.445127 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-06-01 22:56:59.445134 | orchestrator | Sunday 01 June 2025 22:49:33 +0000 (0:00:01.147) 0:03:18.405 ***********
2025-06-01 22:56:59.445142 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.445150 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.445158 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.445165 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:56:59.445173 | orchestrator |
2025-06-01 22:56:59.445181 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-06-01 22:56:59.445189 | orchestrator | Sunday 01 June 2025 22:49:34 +0000 (0:00:01.104) 0:03:19.510 ***********
2025-06-01 22:56:59.445196 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.445202 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.445209 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.445216 | orchestrator |
2025-06-01 22:56:59.445222 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-06-01 22:56:59.445233 | orchestrator | Sunday 01 June 2025 22:49:35 +0000 (0:00:00.419) 0:03:19.929 ***********
2025-06-01 22:56:59.445240 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:56:59.445247 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:56:59.445253 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:56:59.445260 | orchestrator |
2025-06-01 22:56:59.445266 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-06-01 22:56:59.445273 | orchestrator | Sunday 01 June 2025 22:49:37 +0000 (0:00:01.955) 0:03:21.885 ***********
2025-06-01 22:56:59.445280 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-01 22:56:59.445286 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-01 22:56:59.445293 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-01 22:56:59.445299 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.445306 | orchestrator |
2025-06-01 22:56:59.445313 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-06-01 22:56:59.445319 | orchestrator | Sunday 01 June 2025 22:49:37 +0000 (0:00:00.641) 0:03:22.526 ***********
2025-06-01 22:56:59.445330 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.445337 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.445343 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.445350 | orchestrator |
2025-06-01 22:56:59.445356 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-06-01 22:56:59.445363 | orchestrator | Sunday 01 June 2025 22:49:38 +0000 (0:00:00.343) 0:03:22.869 ***********
2025-06-01 22:56:59.445370 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.445380 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:56:59.445387 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:56:59.445393 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:56:59.445400 | orchestrator |
2025-06-01 22:56:59.445407 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-06-01 22:56:59.445413 | orchestrator | Sunday 01 June 2025 22:49:39 +0000 (0:00:01.030) 0:03:23.899 ***********
2025-06-01 22:56:59.445420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 22:56:59.445426 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 22:56:59.445433 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 22:56:59.445440 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.445446 | orchestrator |
2025-06-01 22:56:59.445453 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-06-01 22:56:59.445460 | orchestrator | Sunday 01 June 2025 22:49:39 +0000 (0:00:00.433) 0:03:24.333 ***********
2025-06-01 22:56:59.445466 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.445473 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.445479 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.445486 | orchestrator |
2025-06-01 22:56:59.445493 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-06-01 22:56:59.445499 | orchestrator | Sunday 01 June 2025 22:49:39 +0000 (0:00:00.354) 0:03:24.687 ***********
2025-06-01 22:56:59.445506 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.445512 | orchestrator |
2025-06-01 22:56:59.445519 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-06-01 22:56:59.445525 | orchestrator | Sunday 01 June 2025 22:49:40 +0000 (0:00:00.246) 0:03:24.933 ***********
2025-06-01 22:56:59.445532 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.445538 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.445545 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.445552 | orchestrator |
2025-06-01 22:56:59.445558 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-06-01 22:56:59.445565 | orchestrator | Sunday 01 June 2025 22:49:40 +0000 (0:00:00.361) 0:03:25.294 ***********
2025-06-01 22:56:59.445571 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.445578 | orchestrator |
2025-06-01 22:56:59.445584 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-06-01 22:56:59.445591 | orchestrator | Sunday 01 June 2025 22:49:40 +0000 (0:00:00.208) 0:03:25.503 ***********
2025-06-01 22:56:59.445597 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.445604 | orchestrator |
2025-06-01 22:56:59.445610 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-06-01 22:56:59.445617 | orchestrator | Sunday 01 June 2025 22:49:40 +0000 (0:00:00.209) 0:03:25.713 ***********
2025-06-01 22:56:59.445623 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.445630 | orchestrator |
2025-06-01 22:56:59.445637 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-06-01 22:56:59.445643 | orchestrator | Sunday 01 June 2025 22:49:41 +0000 (0:00:00.404) 0:03:26.117 ***********
2025-06-01 22:56:59.445650 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.445656 | orchestrator |
2025-06-01 22:56:59.445675 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-01 22:56:59.445682 | orchestrator | Sunday 01 June 2025 22:49:41 +0000 (0:00:00.212) 0:03:26.330 *********** 2025-06-01 22:56:59.445693 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.445700 | orchestrator | 2025-06-01 22:56:59.445706 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-01 22:56:59.445713 | orchestrator | Sunday 01 June 2025 22:49:41 +0000 (0:00:00.233) 0:03:26.564 *********** 2025-06-01 22:56:59.445720 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 22:56:59.445726 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-01 22:56:59.445733 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 22:56:59.445739 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.445746 | orchestrator | 2025-06-01 22:56:59.445753 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-01 22:56:59.445759 | orchestrator | Sunday 01 June 2025 22:49:42 +0000 (0:00:00.434) 0:03:26.998 *********** 2025-06-01 22:56:59.445766 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.445772 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.445779 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.445785 | orchestrator | 2025-06-01 22:56:59.445796 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-01 22:56:59.445803 | orchestrator | Sunday 01 June 2025 22:49:42 +0000 (0:00:00.344) 0:03:27.343 *********** 2025-06-01 22:56:59.445809 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.445816 | orchestrator | 2025-06-01 22:56:59.445823 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-01 
22:56:59.445829 | orchestrator | Sunday 01 June 2025 22:49:42 +0000 (0:00:00.245) 0:03:27.588 *********** 2025-06-01 22:56:59.445836 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.445842 | orchestrator | 2025-06-01 22:56:59.445849 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-01 22:56:59.445856 | orchestrator | Sunday 01 June 2025 22:49:42 +0000 (0:00:00.197) 0:03:27.786 *********** 2025-06-01 22:56:59.445862 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.445869 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.445875 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.445882 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:56:59.445889 | orchestrator | 2025-06-01 22:56:59.445895 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-01 22:56:59.445902 | orchestrator | Sunday 01 June 2025 22:49:43 +0000 (0:00:01.033) 0:03:28.819 *********** 2025-06-01 22:56:59.445909 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:56:59.445915 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:56:59.445922 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.445928 | orchestrator | 2025-06-01 22:56:59.445940 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-01 22:56:59.445946 | orchestrator | Sunday 01 June 2025 22:49:44 +0000 (0:00:00.343) 0:03:29.163 *********** 2025-06-01 22:56:59.445953 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:56:59.445960 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:56:59.445966 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:56:59.445973 | orchestrator | 2025-06-01 22:56:59.445980 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-01 
22:56:59.445986 | orchestrator | Sunday 01 June 2025 22:49:45 +0000 (0:00:01.388) 0:03:30.551 *********** 2025-06-01 22:56:59.445993 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 22:56:59.445999 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-01 22:56:59.446006 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 22:56:59.446013 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.446107 | orchestrator | 2025-06-01 22:56:59.446114 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-01 22:56:59.446121 | orchestrator | Sunday 01 June 2025 22:49:46 +0000 (0:00:01.097) 0:03:31.648 *********** 2025-06-01 22:56:59.446133 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:56:59.446140 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:56:59.446147 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.446153 | orchestrator | 2025-06-01 22:56:59.446160 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-06-01 22:56:59.446166 | orchestrator | Sunday 01 June 2025 22:49:47 +0000 (0:00:00.391) 0:03:32.039 *********** 2025-06-01 22:56:59.446173 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.446179 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.446186 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.446193 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:56:59.446199 | orchestrator | 2025-06-01 22:56:59.446206 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-01 22:56:59.446212 | orchestrator | Sunday 01 June 2025 22:49:48 +0000 (0:00:01.040) 0:03:33.080 *********** 2025-06-01 22:56:59.446219 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:56:59.446226 | orchestrator | 
ok: [testbed-node-4] 2025-06-01 22:56:59.446232 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.446239 | orchestrator | 2025-06-01 22:56:59.446245 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-01 22:56:59.446252 | orchestrator | Sunday 01 June 2025 22:49:48 +0000 (0:00:00.384) 0:03:33.465 *********** 2025-06-01 22:56:59.446258 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:56:59.446265 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:56:59.446271 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:56:59.446278 | orchestrator | 2025-06-01 22:56:59.446285 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-01 22:56:59.446291 | orchestrator | Sunday 01 June 2025 22:49:49 +0000 (0:00:01.341) 0:03:34.806 *********** 2025-06-01 22:56:59.446298 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 22:56:59.446304 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-01 22:56:59.446311 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 22:56:59.446318 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.446324 | orchestrator | 2025-06-01 22:56:59.446331 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-01 22:56:59.446338 | orchestrator | Sunday 01 June 2025 22:49:50 +0000 (0:00:00.877) 0:03:35.684 *********** 2025-06-01 22:56:59.446344 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:56:59.446351 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:56:59.446357 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.446364 | orchestrator | 2025-06-01 22:56:59.446370 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-06-01 22:56:59.446377 | orchestrator | Sunday 01 June 2025 22:49:51 +0000 (0:00:00.406) 0:03:36.091 *********** 
2025-06-01 22:56:59.446384 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.446390 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.446397 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.446403 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.446410 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.446416 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.446423 | orchestrator | 2025-06-01 22:56:59.446429 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-01 22:56:59.446436 | orchestrator | Sunday 01 June 2025 22:49:52 +0000 (0:00:00.853) 0:03:36.944 *********** 2025-06-01 22:56:59.446468 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.446476 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.446482 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.446489 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:56:59.446496 | orchestrator | 2025-06-01 22:56:59.446503 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-01 22:56:59.446514 | orchestrator | Sunday 01 June 2025 22:49:53 +0000 (0:00:01.111) 0:03:38.056 *********** 2025-06-01 22:56:59.446521 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.446527 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.446534 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.446541 | orchestrator | 2025-06-01 22:56:59.446547 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-01 22:56:59.446554 | orchestrator | Sunday 01 June 2025 22:49:53 +0000 (0:00:00.399) 0:03:38.455 *********** 2025-06-01 22:56:59.446560 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.446567 | orchestrator | changed: [testbed-node-1] 2025-06-01 
22:56:59.446573 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.446580 | orchestrator | 2025-06-01 22:56:59.446587 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-01 22:56:59.446593 | orchestrator | Sunday 01 June 2025 22:49:54 +0000 (0:00:01.206) 0:03:39.662 *********** 2025-06-01 22:56:59.446600 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-01 22:56:59.446606 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-01 22:56:59.446617 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-01 22:56:59.446624 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.446630 | orchestrator | 2025-06-01 22:56:59.446637 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-01 22:56:59.446644 | orchestrator | Sunday 01 June 2025 22:49:55 +0000 (0:00:01.005) 0:03:40.668 *********** 2025-06-01 22:56:59.446650 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.446657 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.446676 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.446683 | orchestrator | 2025-06-01 22:56:59.446690 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-06-01 22:56:59.446697 | orchestrator | 2025-06-01 22:56:59.446704 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-01 22:56:59.446710 | orchestrator | Sunday 01 June 2025 22:49:56 +0000 (0:00:00.877) 0:03:41.546 *********** 2025-06-01 22:56:59.446717 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:56:59.446724 | orchestrator | 2025-06-01 22:56:59.446731 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-01 
22:56:59.446738 | orchestrator | Sunday 01 June 2025 22:49:57 +0000 (0:00:00.562) 0:03:42.108 *********** 2025-06-01 22:56:59.446744 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:56:59.446751 | orchestrator | 2025-06-01 22:56:59.446758 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-01 22:56:59.446765 | orchestrator | Sunday 01 June 2025 22:49:57 +0000 (0:00:00.731) 0:03:42.840 *********** 2025-06-01 22:56:59.446771 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.446778 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.446785 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.446792 | orchestrator | 2025-06-01 22:56:59.446799 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-01 22:56:59.446805 | orchestrator | Sunday 01 June 2025 22:49:58 +0000 (0:00:00.730) 0:03:43.570 *********** 2025-06-01 22:56:59.446812 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.446819 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.446825 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.446832 | orchestrator | 2025-06-01 22:56:59.446839 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-01 22:56:59.446846 | orchestrator | Sunday 01 June 2025 22:49:59 +0000 (0:00:00.313) 0:03:43.884 *********** 2025-06-01 22:56:59.446852 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.446859 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.446866 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.446878 | orchestrator | 2025-06-01 22:56:59.446885 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-01 22:56:59.446892 | orchestrator | Sunday 01 June 2025 22:49:59 
+0000 (0:00:00.297) 0:03:44.181 *********** 2025-06-01 22:56:59.446898 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.446905 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.446912 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.446919 | orchestrator | 2025-06-01 22:56:59.446925 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-01 22:56:59.446932 | orchestrator | Sunday 01 June 2025 22:49:59 +0000 (0:00:00.586) 0:03:44.768 *********** 2025-06-01 22:56:59.446939 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.446946 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.446952 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.446959 | orchestrator | 2025-06-01 22:56:59.446966 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-01 22:56:59.446973 | orchestrator | Sunday 01 June 2025 22:50:00 +0000 (0:00:00.746) 0:03:45.514 *********** 2025-06-01 22:56:59.446979 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.446986 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.446993 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.446999 | orchestrator | 2025-06-01 22:56:59.447006 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-01 22:56:59.447013 | orchestrator | Sunday 01 June 2025 22:50:01 +0000 (0:00:00.351) 0:03:45.866 *********** 2025-06-01 22:56:59.447020 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.447026 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.447033 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.447039 | orchestrator | 2025-06-01 22:56:59.447046 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-01 22:56:59.447075 | orchestrator | Sunday 01 June 2025 22:50:01 +0000 (0:00:00.322) 
0:03:46.188 *********** 2025-06-01 22:56:59.447083 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.447090 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.447096 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.447103 | orchestrator | 2025-06-01 22:56:59.447110 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-01 22:56:59.447116 | orchestrator | Sunday 01 June 2025 22:50:02 +0000 (0:00:01.130) 0:03:47.318 *********** 2025-06-01 22:56:59.447123 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.447130 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.447136 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.447143 | orchestrator | 2025-06-01 22:56:59.447150 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-01 22:56:59.447156 | orchestrator | Sunday 01 June 2025 22:50:03 +0000 (0:00:00.735) 0:03:48.054 *********** 2025-06-01 22:56:59.447163 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.447170 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.447176 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.447183 | orchestrator | 2025-06-01 22:56:59.447190 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-01 22:56:59.447196 | orchestrator | Sunday 01 June 2025 22:50:03 +0000 (0:00:00.315) 0:03:48.370 *********** 2025-06-01 22:56:59.447203 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.447210 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.447216 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.447223 | orchestrator | 2025-06-01 22:56:59.447230 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-01 22:56:59.447243 | orchestrator | Sunday 01 June 2025 22:50:03 +0000 (0:00:00.328) 0:03:48.698 *********** 2025-06-01 22:56:59.447250 | 
orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.447256 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.447263 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.447270 | orchestrator | 2025-06-01 22:56:59.447276 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-01 22:56:59.447288 | orchestrator | Sunday 01 June 2025 22:50:04 +0000 (0:00:00.601) 0:03:49.300 *********** 2025-06-01 22:56:59.447294 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.447301 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.447307 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.447314 | orchestrator | 2025-06-01 22:56:59.447321 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-01 22:56:59.447327 | orchestrator | Sunday 01 June 2025 22:50:04 +0000 (0:00:00.317) 0:03:49.617 *********** 2025-06-01 22:56:59.447334 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.447341 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.447347 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.447354 | orchestrator | 2025-06-01 22:56:59.447360 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-01 22:56:59.447367 | orchestrator | Sunday 01 June 2025 22:50:05 +0000 (0:00:00.328) 0:03:49.945 *********** 2025-06-01 22:56:59.447373 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.447380 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.447386 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.447393 | orchestrator | 2025-06-01 22:56:59.447400 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-01 22:56:59.447406 | orchestrator | Sunday 01 June 2025 22:50:05 +0000 (0:00:00.378) 0:03:50.323 *********** 2025-06-01 22:56:59.447413 | 
orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.447419 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.447426 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.447432 | orchestrator | 2025-06-01 22:56:59.447439 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-01 22:56:59.447445 | orchestrator | Sunday 01 June 2025 22:50:06 +0000 (0:00:00.662) 0:03:50.986 *********** 2025-06-01 22:56:59.447452 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.447458 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.447465 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.447472 | orchestrator | 2025-06-01 22:56:59.447478 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-01 22:56:59.447485 | orchestrator | Sunday 01 June 2025 22:50:06 +0000 (0:00:00.438) 0:03:51.424 *********** 2025-06-01 22:56:59.447492 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.447498 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.447505 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.447511 | orchestrator | 2025-06-01 22:56:59.447518 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-01 22:56:59.447525 | orchestrator | Sunday 01 June 2025 22:50:06 +0000 (0:00:00.372) 0:03:51.796 *********** 2025-06-01 22:56:59.447531 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.447538 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.447545 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.447551 | orchestrator | 2025-06-01 22:56:59.447558 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-06-01 22:56:59.447564 | orchestrator | Sunday 01 June 2025 22:50:07 +0000 (0:00:00.806) 0:03:52.603 *********** 2025-06-01 22:56:59.447571 | orchestrator | ok: [testbed-node-0] 2025-06-01 
22:56:59.447577 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.447584 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.447590 | orchestrator | 2025-06-01 22:56:59.447597 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-06-01 22:56:59.447604 | orchestrator | Sunday 01 June 2025 22:50:08 +0000 (0:00:00.421) 0:03:53.025 *********** 2025-06-01 22:56:59.447610 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:56:59.447617 | orchestrator | 2025-06-01 22:56:59.447623 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-06-01 22:56:59.447630 | orchestrator | Sunday 01 June 2025 22:50:08 +0000 (0:00:00.486) 0:03:53.511 *********** 2025-06-01 22:56:59.447636 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.447760 | orchestrator | 2025-06-01 22:56:59.447768 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-06-01 22:56:59.447775 | orchestrator | Sunday 01 June 2025 22:50:08 +0000 (0:00:00.118) 0:03:53.630 *********** 2025-06-01 22:56:59.447781 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-01 22:56:59.447787 | orchestrator | 2025-06-01 22:56:59.447815 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-06-01 22:56:59.447823 | orchestrator | Sunday 01 June 2025 22:50:09 +0000 (0:00:01.199) 0:03:54.829 *********** 2025-06-01 22:56:59.447829 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.447835 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.447841 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.447847 | orchestrator | 2025-06-01 22:56:59.447854 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-06-01 22:56:59.447860 | orchestrator | Sunday 01 June 2025 
22:50:10 +0000 (0:00:00.396) 0:03:55.226 *********** 2025-06-01 22:56:59.447866 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.447872 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.447879 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.447885 | orchestrator | 2025-06-01 22:56:59.447891 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-06-01 22:56:59.447897 | orchestrator | Sunday 01 June 2025 22:50:10 +0000 (0:00:00.359) 0:03:55.586 *********** 2025-06-01 22:56:59.447903 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.447909 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.447916 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.447922 | orchestrator | 2025-06-01 22:56:59.447928 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-06-01 22:56:59.447934 | orchestrator | Sunday 01 June 2025 22:50:12 +0000 (0:00:01.341) 0:03:56.927 *********** 2025-06-01 22:56:59.447940 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.447946 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.447953 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.447959 | orchestrator | 2025-06-01 22:56:59.447969 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-06-01 22:56:59.447975 | orchestrator | Sunday 01 June 2025 22:50:12 +0000 (0:00:00.934) 0:03:57.861 *********** 2025-06-01 22:56:59.447982 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.447988 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.447994 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.448000 | orchestrator | 2025-06-01 22:56:59.448006 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-06-01 22:56:59.448013 | orchestrator | Sunday 01 June 2025 22:50:13 +0000 (0:00:00.659) 
0:03:58.521 *********** 2025-06-01 22:56:59.448019 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.448025 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.448031 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.448037 | orchestrator | 2025-06-01 22:56:59.448044 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-06-01 22:56:59.448050 | orchestrator | Sunday 01 June 2025 22:50:14 +0000 (0:00:00.636) 0:03:59.158 *********** 2025-06-01 22:56:59.448056 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.448062 | orchestrator | 2025-06-01 22:56:59.448068 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-06-01 22:56:59.448074 | orchestrator | Sunday 01 June 2025 22:50:15 +0000 (0:00:01.286) 0:04:00.444 *********** 2025-06-01 22:56:59.448081 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.448087 | orchestrator | 2025-06-01 22:56:59.448093 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-06-01 22:56:59.448099 | orchestrator | Sunday 01 June 2025 22:50:16 +0000 (0:00:00.709) 0:04:01.154 *********** 2025-06-01 22:56:59.448105 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-01 22:56:59.448111 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 22:56:59.448123 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 22:56:59.448130 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-01 22:56:59.448136 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-06-01 22:56:59.448142 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-01 22:56:59.448148 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-01 22:56:59.448154 | orchestrator | 
changed: [testbed-node-0 -> {{ item }}] 2025-06-01 22:56:59.448160 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-06-01 22:56:59.448166 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-06-01 22:56:59.448173 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-01 22:56:59.448179 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-06-01 22:56:59.448185 | orchestrator | 2025-06-01 22:56:59.448191 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-06-01 22:56:59.448197 | orchestrator | Sunday 01 June 2025 22:50:19 +0000 (0:00:03.580) 0:04:04.734 *********** 2025-06-01 22:56:59.448203 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.448209 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.448216 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.448222 | orchestrator | 2025-06-01 22:56:59.448228 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-06-01 22:56:59.448234 | orchestrator | Sunday 01 June 2025 22:50:21 +0000 (0:00:01.578) 0:04:06.313 *********** 2025-06-01 22:56:59.448240 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.448246 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.448252 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.448258 | orchestrator | 2025-06-01 22:56:59.448264 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-06-01 22:56:59.448270 | orchestrator | Sunday 01 June 2025 22:50:21 +0000 (0:00:00.410) 0:04:06.723 *********** 2025-06-01 22:56:59.448277 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.448283 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.448289 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.448295 | orchestrator | 2025-06-01 22:56:59.448301 | orchestrator | TASK [ceph-mon : Generate initial monmap] 
************************************** 2025-06-01 22:56:59.448307 | orchestrator | Sunday 01 June 2025 22:50:22 +0000 (0:00:00.440) 0:04:07.163 *********** 2025-06-01 22:56:59.448313 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.448319 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.448325 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.448332 | orchestrator | 2025-06-01 22:56:59.448338 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-06-01 22:56:59.448361 | orchestrator | Sunday 01 June 2025 22:50:24 +0000 (0:00:02.396) 0:04:09.560 *********** 2025-06-01 22:56:59.448368 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.448374 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.448381 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.448387 | orchestrator | 2025-06-01 22:56:59.448393 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-06-01 22:56:59.448399 | orchestrator | Sunday 01 June 2025 22:50:26 +0000 (0:00:02.089) 0:04:11.649 *********** 2025-06-01 22:56:59.448405 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.448412 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.448418 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.448424 | orchestrator | 2025-06-01 22:56:59.448430 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-06-01 22:56:59.448437 | orchestrator | Sunday 01 June 2025 22:50:27 +0000 (0:00:00.344) 0:04:11.994 *********** 2025-06-01 22:56:59.448443 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:56:59.448449 | orchestrator | 2025-06-01 22:56:59.448455 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-06-01 22:56:59.448466 | 
orchestrator | Sunday 01 June 2025 22:50:27 +0000 (0:00:00.467) 0:04:12.462 *********** 2025-06-01 22:56:59.448472 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.448478 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.448484 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.448490 | orchestrator | 2025-06-01 22:56:59.448500 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-06-01 22:56:59.448506 | orchestrator | Sunday 01 June 2025 22:50:28 +0000 (0:00:00.493) 0:04:12.955 *********** 2025-06-01 22:56:59.448512 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.448518 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.448524 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.448530 | orchestrator | 2025-06-01 22:56:59.448537 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-06-01 22:56:59.448543 | orchestrator | Sunday 01 June 2025 22:50:28 +0000 (0:00:00.302) 0:04:13.258 *********** 2025-06-01 22:56:59.448549 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:56:59.448555 | orchestrator | 2025-06-01 22:56:59.448561 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-06-01 22:56:59.448567 | orchestrator | Sunday 01 June 2025 22:50:28 +0000 (0:00:00.521) 0:04:13.780 *********** 2025-06-01 22:56:59.448574 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.448580 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.448586 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.448592 | orchestrator | 2025-06-01 22:56:59.448598 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-06-01 22:56:59.448604 | orchestrator | Sunday 01 June 2025 22:50:30 +0000 (0:00:01.870) 
0:04:15.651 *********** 2025-06-01 22:56:59.448610 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.448617 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.448623 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.448629 | orchestrator | 2025-06-01 22:56:59.448635 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-06-01 22:56:59.448641 | orchestrator | Sunday 01 June 2025 22:50:31 +0000 (0:00:01.158) 0:04:16.809 *********** 2025-06-01 22:56:59.448647 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.448653 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.448672 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.448679 | orchestrator | 2025-06-01 22:56:59.448685 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-06-01 22:56:59.448691 | orchestrator | Sunday 01 June 2025 22:50:33 +0000 (0:00:01.888) 0:04:18.698 *********** 2025-06-01 22:56:59.448697 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.448703 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.448709 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.448715 | orchestrator | 2025-06-01 22:56:59.448722 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-06-01 22:56:59.448728 | orchestrator | Sunday 01 June 2025 22:50:35 +0000 (0:00:01.956) 0:04:20.655 *********** 2025-06-01 22:56:59.448734 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:56:59.448740 | orchestrator | 2025-06-01 22:56:59.448746 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2025-06-01 22:56:59.448752 | orchestrator | Sunday 01 June 2025 22:50:36 +0000 (0:00:00.715) 0:04:21.370 *********** 2025-06-01 22:56:59.448758 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.448765 | orchestrator | 2025-06-01 22:56:59.448771 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-06-01 22:56:59.448777 | orchestrator | Sunday 01 June 2025 22:50:37 +0000 (0:00:01.135) 0:04:22.506 *********** 2025-06-01 22:56:59.448783 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.448789 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.448795 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.448806 | orchestrator | 2025-06-01 22:56:59.448812 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-06-01 22:56:59.448818 | orchestrator | Sunday 01 June 2025 22:50:47 +0000 (0:00:09.497) 0:04:32.003 *********** 2025-06-01 22:56:59.448825 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.448831 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.448837 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.448843 | orchestrator | 2025-06-01 22:56:59.448849 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-06-01 22:56:59.448855 | orchestrator | Sunday 01 June 2025 22:50:47 +0000 (0:00:00.687) 0:04:32.691 *********** 2025-06-01 22:56:59.448882 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e3e7b0957baf8115ce1aa82c6e1c050db579da06'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-06-01 22:56:59.448892 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e3e7b0957baf8115ce1aa82c6e1c050db579da06'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-06-01 22:56:59.448899 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e3e7b0957baf8115ce1aa82c6e1c050db579da06'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-06-01 22:56:59.448910 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e3e7b0957baf8115ce1aa82c6e1c050db579da06'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-06-01 22:56:59.448918 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e3e7b0957baf8115ce1aa82c6e1c050db579da06'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-06-01 22:56:59.448925 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e3e7b0957baf8115ce1aa82c6e1c050db579da06'}}, {'key': 'osd_crush_chooseleaf_type', 'value': 
'__omit_place_holder__e3e7b0957baf8115ce1aa82c6e1c050db579da06'}])  2025-06-01 22:56:59.448933 | orchestrator | 2025-06-01 22:56:59.448940 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-01 22:56:59.448946 | orchestrator | Sunday 01 June 2025 22:51:01 +0000 (0:00:13.692) 0:04:46.383 *********** 2025-06-01 22:56:59.448952 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.448958 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.448964 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.448970 | orchestrator | 2025-06-01 22:56:59.448976 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-01 22:56:59.448982 | orchestrator | Sunday 01 June 2025 22:51:01 +0000 (0:00:00.347) 0:04:46.731 *********** 2025-06-01 22:56:59.448988 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:56:59.448999 | orchestrator | 2025-06-01 22:56:59.449005 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-01 22:56:59.449011 | orchestrator | Sunday 01 June 2025 22:51:02 +0000 (0:00:00.760) 0:04:47.491 *********** 2025-06-01 22:56:59.449018 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.449024 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.449030 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.449036 | orchestrator | 2025-06-01 22:56:59.449042 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-01 22:56:59.449048 | orchestrator | Sunday 01 June 2025 22:51:02 +0000 (0:00:00.342) 0:04:47.834 *********** 2025-06-01 22:56:59.449054 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.449060 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.449066 | orchestrator | skipping: [testbed-node-2] 2025-06-01 
22:56:59.449072 | orchestrator | 2025-06-01 22:56:59.449079 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-01 22:56:59.449085 | orchestrator | Sunday 01 June 2025 22:51:03 +0000 (0:00:00.342) 0:04:48.176 *********** 2025-06-01 22:56:59.449091 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-01 22:56:59.449097 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-01 22:56:59.449103 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-01 22:56:59.449109 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.449115 | orchestrator | 2025-06-01 22:56:59.449121 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-01 22:56:59.449127 | orchestrator | Sunday 01 June 2025 22:51:04 +0000 (0:00:00.850) 0:04:49.027 *********** 2025-06-01 22:56:59.449133 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.449139 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.449146 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.449152 | orchestrator | 2025-06-01 22:56:59.449158 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-06-01 22:56:59.449164 | orchestrator | 2025-06-01 22:56:59.449170 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-01 22:56:59.449176 | orchestrator | Sunday 01 June 2025 22:51:05 +0000 (0:00:00.852) 0:04:49.879 *********** 2025-06-01 22:56:59.449200 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:56:59.449207 | orchestrator | 2025-06-01 22:56:59.449213 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-01 22:56:59.449220 | orchestrator | Sunday 01 June 2025 22:51:05 +0000 (0:00:00.500) 
0:04:50.380 *********** 2025-06-01 22:56:59.449226 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:56:59.449232 | orchestrator | 2025-06-01 22:56:59.449238 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-01 22:56:59.449244 | orchestrator | Sunday 01 June 2025 22:51:06 +0000 (0:00:00.770) 0:04:51.151 *********** 2025-06-01 22:56:59.449251 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.449257 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.449263 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.449269 | orchestrator | 2025-06-01 22:56:59.449275 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-01 22:56:59.449281 | orchestrator | Sunday 01 June 2025 22:51:07 +0000 (0:00:00.715) 0:04:51.867 *********** 2025-06-01 22:56:59.449287 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.449294 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.449300 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.449306 | orchestrator | 2025-06-01 22:56:59.449312 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-01 22:56:59.449322 | orchestrator | Sunday 01 June 2025 22:51:07 +0000 (0:00:00.328) 0:04:52.195 *********** 2025-06-01 22:56:59.449329 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.449335 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.449345 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.449352 | orchestrator | 2025-06-01 22:56:59.449358 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-01 22:56:59.449364 | orchestrator | Sunday 01 June 2025 22:51:07 +0000 (0:00:00.573) 0:04:52.768 *********** 2025-06-01 22:56:59.449370 | orchestrator 
| skipping: [testbed-node-0] 2025-06-01 22:56:59.449376 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.449382 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.449388 | orchestrator | 2025-06-01 22:56:59.449394 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-01 22:56:59.449401 | orchestrator | Sunday 01 June 2025 22:51:08 +0000 (0:00:00.317) 0:04:53.086 *********** 2025-06-01 22:56:59.449407 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.449413 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.449419 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.449425 | orchestrator | 2025-06-01 22:56:59.449431 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-01 22:56:59.449437 | orchestrator | Sunday 01 June 2025 22:51:08 +0000 (0:00:00.705) 0:04:53.791 *********** 2025-06-01 22:56:59.449443 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.449449 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.449455 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.449461 | orchestrator | 2025-06-01 22:56:59.449467 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-01 22:56:59.449474 | orchestrator | Sunday 01 June 2025 22:51:09 +0000 (0:00:00.294) 0:04:54.086 *********** 2025-06-01 22:56:59.449480 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.449486 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.449492 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.449498 | orchestrator | 2025-06-01 22:56:59.449504 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-01 22:56:59.449510 | orchestrator | Sunday 01 June 2025 22:51:09 +0000 (0:00:00.570) 0:04:54.657 *********** 2025-06-01 22:56:59.449516 | orchestrator | ok: 
[testbed-node-0] 2025-06-01 22:56:59.449523 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.449529 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.449535 | orchestrator | 2025-06-01 22:56:59.449541 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-01 22:56:59.449547 | orchestrator | Sunday 01 June 2025 22:51:10 +0000 (0:00:00.756) 0:04:55.413 *********** 2025-06-01 22:56:59.449553 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.449559 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.449565 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.449571 | orchestrator | 2025-06-01 22:56:59.449577 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-01 22:56:59.449583 | orchestrator | Sunday 01 June 2025 22:51:11 +0000 (0:00:00.797) 0:04:56.211 *********** 2025-06-01 22:56:59.449590 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.449596 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.449602 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.449608 | orchestrator | 2025-06-01 22:56:59.449614 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-01 22:56:59.449620 | orchestrator | Sunday 01 June 2025 22:51:11 +0000 (0:00:00.308) 0:04:56.519 *********** 2025-06-01 22:56:59.449626 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.449632 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.449638 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.449644 | orchestrator | 2025-06-01 22:56:59.449651 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-01 22:56:59.449657 | orchestrator | Sunday 01 June 2025 22:51:12 +0000 (0:00:00.633) 0:04:57.153 *********** 2025-06-01 22:56:59.449677 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.449683 | 
orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.449689 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.449700 | orchestrator | 2025-06-01 22:56:59.449706 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-01 22:56:59.449713 | orchestrator | Sunday 01 June 2025 22:51:12 +0000 (0:00:00.395) 0:04:57.549 *********** 2025-06-01 22:56:59.449719 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.449725 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.449731 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.449738 | orchestrator | 2025-06-01 22:56:59.449744 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-01 22:56:59.449750 | orchestrator | Sunday 01 June 2025 22:51:12 +0000 (0:00:00.286) 0:04:57.835 *********** 2025-06-01 22:56:59.449775 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.449783 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.449789 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.449795 | orchestrator | 2025-06-01 22:56:59.449801 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-01 22:56:59.449808 | orchestrator | Sunday 01 June 2025 22:51:13 +0000 (0:00:00.301) 0:04:58.137 *********** 2025-06-01 22:56:59.449814 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.449820 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.449826 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.449832 | orchestrator | 2025-06-01 22:56:59.449838 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-01 22:56:59.449845 | orchestrator | Sunday 01 June 2025 22:51:13 +0000 (0:00:00.597) 0:04:58.734 *********** 2025-06-01 22:56:59.449851 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.449857 | 
orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.449863 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.449869 | orchestrator | 2025-06-01 22:56:59.449875 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-01 22:56:59.449881 | orchestrator | Sunday 01 June 2025 22:51:14 +0000 (0:00:00.309) 0:04:59.044 *********** 2025-06-01 22:56:59.449888 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.449894 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.449900 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.449906 | orchestrator | 2025-06-01 22:56:59.449912 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-01 22:56:59.449922 | orchestrator | Sunday 01 June 2025 22:51:14 +0000 (0:00:00.357) 0:04:59.401 *********** 2025-06-01 22:56:59.449929 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.449935 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.449941 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.449947 | orchestrator | 2025-06-01 22:56:59.449953 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-01 22:56:59.449960 | orchestrator | Sunday 01 June 2025 22:51:14 +0000 (0:00:00.335) 0:04:59.737 *********** 2025-06-01 22:56:59.449966 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.449972 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.449978 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.449984 | orchestrator | 2025-06-01 22:56:59.449991 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-06-01 22:56:59.449997 | orchestrator | Sunday 01 June 2025 22:51:15 +0000 (0:00:00.895) 0:05:00.632 *********** 2025-06-01 22:56:59.450003 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-01 22:56:59.450009 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-01 22:56:59.450033 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-01 22:56:59.450041 | orchestrator | 2025-06-01 22:56:59.450047 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-06-01 22:56:59.450053 | orchestrator | Sunday 01 June 2025 22:51:16 +0000 (0:00:00.663) 0:05:01.296 *********** 2025-06-01 22:56:59.450059 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:56:59.450065 | orchestrator | 2025-06-01 22:56:59.450076 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-06-01 22:56:59.450083 | orchestrator | Sunday 01 June 2025 22:51:16 +0000 (0:00:00.526) 0:05:01.822 *********** 2025-06-01 22:56:59.450089 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.450095 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.450101 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.450107 | orchestrator | 2025-06-01 22:56:59.450113 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-06-01 22:56:59.450119 | orchestrator | Sunday 01 June 2025 22:51:17 +0000 (0:00:00.987) 0:05:02.809 *********** 2025-06-01 22:56:59.450125 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.450131 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.450137 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.450143 | orchestrator | 2025-06-01 22:56:59.450150 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-06-01 22:56:59.450156 | orchestrator | Sunday 01 June 2025 22:51:18 +0000 (0:00:00.323) 0:05:03.133 *********** 2025-06-01 22:56:59.450162 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-01 
22:56:59.450168 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-01 22:56:59.450174 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-01 22:56:59.450180 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-06-01 22:56:59.450186 | orchestrator | 2025-06-01 22:56:59.450192 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-06-01 22:56:59.450198 | orchestrator | Sunday 01 June 2025 22:51:28 +0000 (0:00:09.858) 0:05:12.991 *********** 2025-06-01 22:56:59.450204 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.450210 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.450216 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.450223 | orchestrator | 2025-06-01 22:56:59.450229 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-06-01 22:56:59.450235 | orchestrator | Sunday 01 June 2025 22:51:28 +0000 (0:00:00.363) 0:05:13.355 *********** 2025-06-01 22:56:59.450241 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-01 22:56:59.450247 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-01 22:56:59.450253 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-01 22:56:59.450259 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-06-01 22:56:59.450265 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 22:56:59.450271 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 22:56:59.450278 | orchestrator | 2025-06-01 22:56:59.450284 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-06-01 22:56:59.450290 | orchestrator | Sunday 01 June 2025 22:51:31 +0000 (0:00:02.861) 0:05:16.216 *********** 2025-06-01 22:56:59.450296 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-01 22:56:59.450302 | 
orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-01 22:56:59.450328 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-01 22:56:59.450335 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-01 22:56:59.450341 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-06-01 22:56:59.450347 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-06-01 22:56:59.450354 | orchestrator | 2025-06-01 22:56:59.450360 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-06-01 22:56:59.450366 | orchestrator | Sunday 01 June 2025 22:51:32 +0000 (0:00:01.323) 0:05:17.539 *********** 2025-06-01 22:56:59.450372 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.450378 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.450385 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.450391 | orchestrator | 2025-06-01 22:56:59.450397 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-06-01 22:56:59.450403 | orchestrator | Sunday 01 June 2025 22:51:33 +0000 (0:00:00.671) 0:05:18.211 *********** 2025-06-01 22:56:59.450417 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.450423 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.450429 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.450435 | orchestrator | 2025-06-01 22:56:59.450441 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-06-01 22:56:59.450447 | orchestrator | Sunday 01 June 2025 22:51:33 +0000 (0:00:00.305) 0:05:18.516 *********** 2025-06-01 22:56:59.450454 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.450460 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.450466 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.450472 | orchestrator | 2025-06-01 22:56:59.450483 | orchestrator | TASK [ceph-mgr : Include 
start_mgr.yml] **************************************** 2025-06-01 22:56:59.450489 | orchestrator | Sunday 01 June 2025 22:51:33 +0000 (0:00:00.303) 0:05:18.820 *********** 2025-06-01 22:56:59.450495 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:56:59.450501 | orchestrator | 2025-06-01 22:56:59.450508 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-06-01 22:56:59.450514 | orchestrator | Sunday 01 June 2025 22:51:34 +0000 (0:00:00.858) 0:05:19.679 *********** 2025-06-01 22:56:59.450520 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.450526 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.450532 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.450539 | orchestrator | 2025-06-01 22:56:59.450545 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-06-01 22:56:59.450551 | orchestrator | Sunday 01 June 2025 22:51:35 +0000 (0:00:00.388) 0:05:20.067 *********** 2025-06-01 22:56:59.450557 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.450563 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.450570 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.450576 | orchestrator | 2025-06-01 22:56:59.450582 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-06-01 22:56:59.450588 | orchestrator | Sunday 01 June 2025 22:51:35 +0000 (0:00:00.361) 0:05:20.429 *********** 2025-06-01 22:56:59.450594 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:56:59.450601 | orchestrator | 2025-06-01 22:56:59.450607 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-06-01 22:56:59.450613 | orchestrator | Sunday 01 June 2025 22:51:36 
+0000 (0:00:01.011) 0:05:21.440 *********** 2025-06-01 22:56:59.450619 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.450625 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.450631 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.450638 | orchestrator | 2025-06-01 22:56:59.450644 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-06-01 22:56:59.450650 | orchestrator | Sunday 01 June 2025 22:51:37 +0000 (0:00:01.246) 0:05:22.686 *********** 2025-06-01 22:56:59.450656 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.450677 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.450683 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.450689 | orchestrator | 2025-06-01 22:56:59.450695 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-06-01 22:56:59.450702 | orchestrator | Sunday 01 June 2025 22:51:38 +0000 (0:00:01.158) 0:05:23.845 *********** 2025-06-01 22:56:59.450708 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.450714 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.450721 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.450727 | orchestrator | 2025-06-01 22:56:59.450733 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-06-01 22:56:59.450739 | orchestrator | Sunday 01 June 2025 22:51:41 +0000 (0:00:02.069) 0:05:25.915 *********** 2025-06-01 22:56:59.450745 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.450752 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.450763 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.450769 | orchestrator | 2025-06-01 22:56:59.450775 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-06-01 22:56:59.450782 | orchestrator | Sunday 01 June 2025 22:51:43 +0000 
(0:00:02.055) 0:05:27.970 *********** 2025-06-01 22:56:59.450788 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.450794 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.450800 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-06-01 22:56:59.450807 | orchestrator | 2025-06-01 22:56:59.450813 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-06-01 22:56:59.450819 | orchestrator | Sunday 01 June 2025 22:51:43 +0000 (0:00:00.441) 0:05:28.411 *********** 2025-06-01 22:56:59.450825 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-06-01 22:56:59.450832 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-06-01 22:56:59.450838 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-06-01 22:56:59.450863 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 
2025-06-01 22:56:59.450871 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-06-01 22:56:59.450877 | orchestrator |
2025-06-01 22:56:59.450883 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2025-06-01 22:56:59.450889 | orchestrator | Sunday 01 June 2025 22:52:07 +0000 (0:00:24.140) 0:05:52.551 ***********
2025-06-01 22:56:59.450896 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-06-01 22:56:59.450902 | orchestrator |
2025-06-01 22:56:59.450908 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-06-01 22:56:59.450914 | orchestrator | Sunday 01 June 2025 22:52:09 +0000 (0:00:01.622) 0:05:54.174 ***********
2025-06-01 22:56:59.450920 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.450927 | orchestrator |
2025-06-01 22:56:59.450933 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2025-06-01 22:56:59.450939 | orchestrator | Sunday 01 June 2025 22:52:10 +0000 (0:00:01.037) 0:05:55.212 ***********
2025-06-01 22:56:59.450945 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.450951 | orchestrator |
2025-06-01 22:56:59.450958 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2025-06-01 22:56:59.450964 | orchestrator | Sunday 01 June 2025 22:52:10 +0000 (0:00:00.159) 0:05:55.372 ***********
2025-06-01 22:56:59.450970 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-06-01 22:56:59.450980 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-06-01 22:56:59.450986 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-06-01 22:56:59.450993 | orchestrator |
2025-06-01 22:56:59.450999 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2025-06-01 22:56:59.451005 | orchestrator | Sunday 01 June 2025 22:52:16 +0000 (0:00:06.273) 0:06:01.645 ***********
2025-06-01 22:56:59.451011 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-06-01 22:56:59.451018 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-06-01 22:56:59.451024 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-06-01 22:56:59.451030 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-06-01 22:56:59.451036 | orchestrator |
2025-06-01 22:56:59.451042 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-01 22:56:59.451049 | orchestrator | Sunday 01 June 2025 22:52:21 +0000 (0:00:04.758) 0:06:06.404 ***********
2025-06-01 22:56:59.451055 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:56:59.451061 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:56:59.451072 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:56:59.451078 | orchestrator |
2025-06-01 22:56:59.451085 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-06-01 22:56:59.451091 | orchestrator | Sunday 01 June 2025 22:52:22 +0000 (0:00:01.049) 0:06:07.453 ***********
2025-06-01 22:56:59.451097 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:56:59.451103 | orchestrator |
2025-06-01 22:56:59.451109 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-06-01 22:56:59.451115 | orchestrator | Sunday 01 June 2025 22:52:23 +0000 (0:00:00.636) 0:06:08.089 ***********
2025-06-01 22:56:59.451121 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.451127 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.451133 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.451140 | orchestrator |
2025-06-01 22:56:59.451146 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-06-01 22:56:59.451152 | orchestrator | Sunday 01 June 2025 22:52:23 +0000 (0:00:00.338) 0:06:08.428 ***********
2025-06-01 22:56:59.451158 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:56:59.451164 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:56:59.451170 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:56:59.451176 | orchestrator |
2025-06-01 22:56:59.451183 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-06-01 22:56:59.451189 | orchestrator | Sunday 01 June 2025 22:52:25 +0000 (0:00:01.628) 0:06:10.057 ***********
2025-06-01 22:56:59.451195 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-01 22:56:59.451201 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-01 22:56:59.451207 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-01 22:56:59.451214 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:56:59.451220 | orchestrator |
2025-06-01 22:56:59.451226 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-06-01 22:56:59.451232 | orchestrator | Sunday 01 June 2025 22:52:26 +0000 (0:00:00.854) 0:06:10.911 ***********
2025-06-01 22:56:59.451238 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:56:59.451244 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:56:59.451251 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:56:59.451257 | orchestrator |
2025-06-01 22:56:59.451263 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-06-01 22:56:59.451269 | orchestrator |
2025-06-01 22:56:59.451275 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-01 22:56:59.451281 | orchestrator | Sunday 01 June 2025 22:52:26 +0000 (0:00:00.606) 0:06:11.518 ***********
2025-06-01 22:56:59.451287 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:56:59.451294 | orchestrator |
2025-06-01 22:56:59.451300 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-01 22:56:59.451306 | orchestrator | Sunday 01 June 2025 22:52:27 +0000 (0:00:00.795) 0:06:12.313 ***********
2025-06-01 22:56:59.451312 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:56:59.451318 | orchestrator |
2025-06-01 22:56:59.451343 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-01 22:56:59.451350 | orchestrator | Sunday 01 June 2025 22:52:28 +0000 (0:00:00.559) 0:06:12.873 ***********
2025-06-01 22:56:59.451356 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.451362 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.451369 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.451375 | orchestrator |
2025-06-01 22:56:59.451381 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-01 22:56:59.451387 | orchestrator | Sunday 01 June 2025 22:52:28 +0000 (0:00:00.311) 0:06:13.184 ***********
2025-06-01 22:56:59.451393 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.451404 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.451410 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.451416 | orchestrator |
2025-06-01 22:56:59.451422 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-01 22:56:59.451428 | orchestrator | Sunday 01 June 2025 22:52:29 +0000 (0:00:00.990) 0:06:14.175 ***********
2025-06-01 22:56:59.451434 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.451440 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.451446 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.451452 | orchestrator |
2025-06-01 22:56:59.451459 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-01 22:56:59.451465 | orchestrator | Sunday 01 June 2025 22:52:30 +0000 (0:00:00.721) 0:06:14.896 ***********
2025-06-01 22:56:59.451471 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.451477 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.451483 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.451489 | orchestrator |
2025-06-01 22:56:59.451499 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-01 22:56:59.451505 | orchestrator | Sunday 01 June 2025 22:52:30 +0000 (0:00:00.667) 0:06:15.564 ***********
2025-06-01 22:56:59.451511 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.451517 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.451524 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.451530 | orchestrator |
2025-06-01 22:56:59.451536 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-01 22:56:59.451542 | orchestrator | Sunday 01 June 2025 22:52:30 +0000 (0:00:00.293) 0:06:15.858 ***********
2025-06-01 22:56:59.451548 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.451554 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.451560 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.451566 | orchestrator |
2025-06-01 22:56:59.451573 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-01 22:56:59.451579 | orchestrator | Sunday 01 June 2025 22:52:31 +0000 (0:00:00.632) 0:06:16.490 ***********
2025-06-01 22:56:59.451585 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.451591 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.451597 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.451603 | orchestrator |
2025-06-01 22:56:59.451609 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-01 22:56:59.451616 | orchestrator | Sunday 01 June 2025 22:52:31 +0000 (0:00:00.326) 0:06:16.816 ***********
2025-06-01 22:56:59.451622 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.451628 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.451634 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.451640 | orchestrator |
2025-06-01 22:56:59.451646 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-01 22:56:59.451652 | orchestrator | Sunday 01 June 2025 22:52:32 +0000 (0:00:00.645) 0:06:17.462 ***********
2025-06-01 22:56:59.451659 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.451677 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.451683 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.451689 | orchestrator |
2025-06-01 22:56:59.451695 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-01 22:56:59.451702 | orchestrator | Sunday 01 June 2025 22:52:33 +0000 (0:00:00.691) 0:06:18.154 ***********
2025-06-01 22:56:59.451708 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.451714 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.451720 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.451726 | orchestrator |
2025-06-01 22:56:59.451732 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-01 22:56:59.451738 | orchestrator | Sunday 01 June 2025 22:52:33 +0000 (0:00:00.595) 0:06:18.750 ***********
2025-06-01 22:56:59.451744 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.451750 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.451756 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.451767 | orchestrator |
2025-06-01 22:56:59.451774 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-01 22:56:59.451780 | orchestrator | Sunday 01 June 2025 22:52:34 +0000 (0:00:00.379) 0:06:19.129 ***********
2025-06-01 22:56:59.451786 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.451792 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.451798 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.451804 | orchestrator |
2025-06-01 22:56:59.451810 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-01 22:56:59.451816 | orchestrator | Sunday 01 June 2025 22:52:34 +0000 (0:00:00.389) 0:06:19.518 ***********
2025-06-01 22:56:59.451823 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.451829 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.451835 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.451841 | orchestrator |
2025-06-01 22:56:59.451847 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-01 22:56:59.451853 | orchestrator | Sunday 01 June 2025 22:52:34 +0000 (0:00:00.309) 0:06:19.827 ***********
2025-06-01 22:56:59.451859 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.451865 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.451871 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.451878 | orchestrator |
2025-06-01 22:56:59.451884 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-01 22:56:59.451890 | orchestrator | Sunday 01 June 2025 22:52:35 +0000 (0:00:00.616) 0:06:20.444 ***********
2025-06-01 22:56:59.451896 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.451902 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.451908 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.451914 | orchestrator |
2025-06-01 22:56:59.451921 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-01 22:56:59.451930 | orchestrator | Sunday 01 June 2025 22:52:35 +0000 (0:00:00.328) 0:06:20.773 ***********
2025-06-01 22:56:59.451936 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.451942 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.451948 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.451954 | orchestrator |
2025-06-01 22:56:59.451961 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-01 22:56:59.451967 | orchestrator | Sunday 01 June 2025 22:52:36 +0000 (0:00:00.293) 0:06:21.066 ***********
2025-06-01 22:56:59.451973 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.451979 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.451985 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.451991 | orchestrator |
2025-06-01 22:56:59.451997 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-01 22:56:59.452004 | orchestrator | Sunday 01 June 2025 22:52:36 +0000 (0:00:00.293) 0:06:21.360 ***********
2025-06-01 22:56:59.452010 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.452016 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.452022 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.452028 | orchestrator |
2025-06-01 22:56:59.452034 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-01 22:56:59.452040 | orchestrator | Sunday 01 June 2025 22:52:37 +0000 (0:00:00.598) 0:06:21.958 ***********
2025-06-01 22:56:59.452046 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.452052 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.452058 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.452064 | orchestrator |
2025-06-01 22:56:59.452070 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-06-01 22:56:59.452080 | orchestrator | Sunday 01 June 2025 22:52:37 +0000 (0:00:00.519) 0:06:22.477 ***********
2025-06-01 22:56:59.452086 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.452092 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.452098 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.452104 | orchestrator |
2025-06-01 22:56:59.452110 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-06-01 22:56:59.452121 | orchestrator | Sunday 01 June 2025 22:52:37 +0000 (0:00:00.336) 0:06:22.814 ***********
2025-06-01 22:56:59.452127 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-01 22:56:59.452133 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-01 22:56:59.452140 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-01 22:56:59.452146 | orchestrator |
2025-06-01 22:56:59.452152 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-06-01 22:56:59.452158 | orchestrator | Sunday 01 June 2025 22:52:38 +0000 (0:00:00.880) 0:06:23.695 ***********
2025-06-01 22:56:59.452164 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:56:59.452170 | orchestrator |
2025-06-01 22:56:59.452176 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-06-01 22:56:59.452182 | orchestrator | Sunday 01 June 2025 22:52:39 +0000 (0:00:00.810) 0:06:24.505 ***********
2025-06-01 22:56:59.452188 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.452195 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.452201 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.452207 | orchestrator |
2025-06-01 22:56:59.452213 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2025-06-01 22:56:59.452219 | orchestrator | Sunday 01 June 2025 22:52:39 +0000 (0:00:00.328) 0:06:24.833 ***********
2025-06-01 22:56:59.452225 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.452231 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.452238 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.452244 | orchestrator |
2025-06-01 22:56:59.452250 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2025-06-01 22:56:59.452256 | orchestrator | Sunday 01 June 2025 22:52:40 +0000 (0:00:00.323) 0:06:25.156 ***********
2025-06-01 22:56:59.452262 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.452268 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.452274 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.452280 | orchestrator |
2025-06-01 22:56:59.452287 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2025-06-01 22:56:59.452293 | orchestrator | Sunday 01 June 2025 22:52:41 +0000 (0:00:00.852) 0:06:26.009 ***********
2025-06-01 22:56:59.452299 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.452305 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.452311 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.452317 | orchestrator |
2025-06-01 22:56:59.452323 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2025-06-01 22:56:59.452329 | orchestrator | Sunday 01 June 2025 22:52:41 +0000 (0:00:00.316) 0:06:26.325 ***********
2025-06-01 22:56:59.452336 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-06-01 22:56:59.452342 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-06-01 22:56:59.452348 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-06-01 22:56:59.452355 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-06-01 22:56:59.452361 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-06-01 22:56:59.452367 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-06-01 22:56:59.452373 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-06-01 22:56:59.452379 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-06-01 22:56:59.452385 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-06-01 22:56:59.452395 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-06-01 22:56:59.452406 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-06-01 22:56:59.452412 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-06-01 22:56:59.452418 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-06-01 22:56:59.452424 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-06-01 22:56:59.452430 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-06-01 22:56:59.452437 | orchestrator |
2025-06-01 22:56:59.452443 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2025-06-01 22:56:59.452449 | orchestrator | Sunday 01 June 2025 22:52:44 +0000 (0:00:02.833) 0:06:29.159 ***********
2025-06-01 22:56:59.452455 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.452461 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.452467 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.452473 | orchestrator |
2025-06-01 22:56:59.452480 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2025-06-01 22:56:59.452486 | orchestrator | Sunday 01 June 2025 22:52:44 +0000 (0:00:00.331) 0:06:29.491 ***********
2025-06-01 22:56:59.452492 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:56:59.452498 | orchestrator |
2025-06-01 22:56:59.452508 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2025-06-01 22:56:59.452515 | orchestrator | Sunday 01 June 2025 22:52:45 +0000 (0:00:00.827) 0:06:30.319 ***********
2025-06-01 22:56:59.452521 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-06-01 22:56:59.452527 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-06-01 22:56:59.452533 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-06-01 22:56:59.452539 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-06-01 22:56:59.452545 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-06-01 22:56:59.452551 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-06-01 22:56:59.452557 | orchestrator |
2025-06-01 22:56:59.452563 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-06-01 22:56:59.452570 | orchestrator | Sunday 01 June 2025 22:52:46 +0000 (0:00:01.040) 0:06:31.359 ***********
2025-06-01 22:56:59.452576 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:56:59.452582 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-01 22:56:59.452588 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-01 22:56:59.452594 | orchestrator |
2025-06-01 22:56:59.452600 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-06-01 22:56:59.452606 | orchestrator | Sunday 01 June 2025 22:52:48 +0000 (0:00:02.083) 0:06:33.443 ***********
2025-06-01 22:56:59.452612 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-01 22:56:59.452618 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-01 22:56:59.452625 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:56:59.452631 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-01 22:56:59.452637 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-06-01 22:56:59.452643 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:56:59.452649 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-01 22:56:59.452655 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-06-01 22:56:59.452689 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:56:59.452696 | orchestrator |
2025-06-01 22:56:59.452702 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-06-01 22:56:59.452709 | orchestrator | Sunday 01 June 2025 22:52:50 +0000 (0:00:01.575) 0:06:35.018 ***********
2025-06-01 22:56:59.452715 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-01 22:56:59.452726 | orchestrator |
2025-06-01 22:56:59.452732 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-06-01 22:56:59.452738 | orchestrator | Sunday 01 June 2025 22:52:52 +0000 (0:00:02.129) 0:06:37.147 ***********
2025-06-01 22:56:59.452744 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:56:59.452751 | orchestrator |
2025-06-01 22:56:59.452757 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-06-01 22:56:59.452763 | orchestrator | Sunday 01 June 2025 22:52:52 +0000 (0:00:00.539) 0:06:37.687 ***********
2025-06-01 22:56:59.452769 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-836f126b-3930-552c-8c28-37312a7074e3', 'data_vg': 'ceph-836f126b-3930-552c-8c28-37312a7074e3'})
2025-06-01 22:56:59.452777 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-656e26cc-5762-5518-9587-501a37b6e3ae', 'data_vg': 'ceph-656e26cc-5762-5518-9587-501a37b6e3ae'})
2025-06-01 22:56:59.452783 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-83360607-213f-5c54-ae9b-aa580894d048', 'data_vg': 'ceph-83360607-213f-5c54-ae9b-aa580894d048'})
2025-06-01 22:56:59.452789 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-04cd8323-667e-5571-83c4-b35d38a67016', 'data_vg': 'ceph-04cd8323-667e-5571-83c4-b35d38a67016'})
2025-06-01 22:56:59.452795 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c', 'data_vg': 'ceph-154be1eb-c9a2-50db-b9e4-8c9f064a0b1c'})
2025-06-01 22:56:59.452805 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c033fef4-2688-55e0-9ca7-53dbc156bc4e', 'data_vg': 'ceph-c033fef4-2688-55e0-9ca7-53dbc156bc4e'})
2025-06-01 22:56:59.452811 | orchestrator |
2025-06-01 22:56:59.452818 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-06-01 22:56:59.452824 | orchestrator | Sunday 01 June 2025 22:53:34 +0000 (0:00:41.866) 0:07:19.554 ***********
2025-06-01 22:56:59.452830 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.452836 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.452842 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.452848 | orchestrator |
2025-06-01 22:56:59.452855 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-06-01 22:56:59.452861 | orchestrator | Sunday 01 June 2025 22:53:35 +0000 (0:00:00.642) 0:07:20.197 ***********
2025-06-01 22:56:59.452867 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:56:59.452873 | orchestrator |
2025-06-01 22:56:59.452879 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-06-01 22:56:59.452886 | orchestrator | Sunday 01 June 2025 22:53:35 +0000 (0:00:00.521) 0:07:20.718 ***********
2025-06-01 22:56:59.452892 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.452898 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.452904 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.452910 | orchestrator |
2025-06-01 22:56:59.452916 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2025-06-01 22:56:59.452926 | orchestrator | Sunday 01 June 2025 22:53:36 +0000 (0:00:00.691) 0:07:21.410 ***********
2025-06-01 22:56:59.452932 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.452938 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.452945 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.452951 | orchestrator |
2025-06-01 22:56:59.452957 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2025-06-01 22:56:59.452963 | orchestrator | Sunday 01 June 2025 22:53:39 +0000 (0:00:02.821) 0:07:24.232 ***********
2025-06-01 22:56:59.452969 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:56:59.452976 | orchestrator |
2025-06-01 22:56:59.452982 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2025-06-01 22:56:59.452992 | orchestrator | Sunday 01 June 2025 22:53:39 +0000 (0:00:00.547) 0:07:24.779 ***********
2025-06-01 22:56:59.452998 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:56:59.453004 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:56:59.453011 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:56:59.453017 | orchestrator |
2025-06-01 22:56:59.453023 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2025-06-01 22:56:59.453029 | orchestrator | Sunday 01 June 2025 22:53:41 +0000 (0:00:01.183) 0:07:25.963 ***********
2025-06-01 22:56:59.453035 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:56:59.453041 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:56:59.453048 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:56:59.453054 | orchestrator |
2025-06-01 22:56:59.453060 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2025-06-01 22:56:59.453066 | orchestrator | Sunday 01 June 2025 22:53:42 +0000 (0:00:01.434) 0:07:27.397 ***********
2025-06-01 22:56:59.453072 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:56:59.453078 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:56:59.453084 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:56:59.453090 | orchestrator |
2025-06-01 22:56:59.453097 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2025-06-01 22:56:59.453103 | orchestrator | Sunday 01 June 2025 22:53:44 +0000 (0:00:01.857) 0:07:29.255 ***********
2025-06-01 22:56:59.453109 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.453115 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.453121 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.453127 | orchestrator |
2025-06-01 22:56:59.453133 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2025-06-01 22:56:59.453139 | orchestrator | Sunday 01 June 2025 22:53:44 +0000 (0:00:00.324) 0:07:29.580 ***********
2025-06-01 22:56:59.453144 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.453149 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.453155 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.453160 | orchestrator |
2025-06-01 22:56:59.453165 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2025-06-01 22:56:59.453171 | orchestrator | Sunday 01 June 2025 22:53:45 +0000 (0:00:00.304) 0:07:29.884 ***********
2025-06-01 22:56:59.453176 | orchestrator | ok: [testbed-node-3] => (item=3)
2025-06-01 22:56:59.453181 | orchestrator | ok: [testbed-node-4] => (item=4)
2025-06-01 22:56:59.453187 | orchestrator | ok: [testbed-node-5] => (item=5)
2025-06-01 22:56:59.453192 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-01 22:56:59.453197 | orchestrator | ok: [testbed-node-4] => (item=1)
2025-06-01 22:56:59.453203 | orchestrator | ok: [testbed-node-5] => (item=2)
2025-06-01 22:56:59.453208 | orchestrator |
2025-06-01 22:56:59.453214 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2025-06-01 22:56:59.453219 | orchestrator | Sunday 01 June 2025 22:53:46 +0000 (0:00:01.454) 0:07:31.339 ***********
2025-06-01 22:56:59.453225 | orchestrator | changed: [testbed-node-3] => (item=3)
2025-06-01 22:56:59.453230 | orchestrator | changed: [testbed-node-4] => (item=4)
2025-06-01 22:56:59.453236 | orchestrator | changed: [testbed-node-5] => (item=5)
2025-06-01 22:56:59.453241 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-06-01 22:56:59.453246 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-06-01 22:56:59.453252 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-06-01 22:56:59.453257 | orchestrator |
2025-06-01 22:56:59.453263 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2025-06-01 22:56:59.453268 | orchestrator | Sunday 01 June 2025 22:53:48 +0000 (0:00:02.231) 0:07:33.570 ***********
2025-06-01 22:56:59.453273 | orchestrator | changed: [testbed-node-3] => (item=3)
2025-06-01 22:56:59.453279 | orchestrator | changed: [testbed-node-4] => (item=4)
2025-06-01 22:56:59.453284 | orchestrator | changed: [testbed-node-5] => (item=5)
2025-06-01 22:56:59.453289 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-06-01 22:56:59.453297 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-06-01 22:56:59.453310 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-06-01 22:56:59.453315 | orchestrator |
2025-06-01 22:56:59.453321 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2025-06-01 22:56:59.453326 | orchestrator | Sunday 01 June 2025 22:53:52 +0000 (0:00:03.788) 0:07:37.358 ***********
2025-06-01 22:56:59.453331 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.453337 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.453342 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-06-01 22:56:59.453348 | orchestrator |
2025-06-01 22:56:59.453353 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2025-06-01 22:56:59.453358 | orchestrator | Sunday 01 June 2025 22:53:55 +0000 (0:00:03.005) 0:07:40.364 ***********
2025-06-01 22:56:59.453364 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.453369 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.453374 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2025-06-01 22:56:59.453380 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-01 22:56:59.453385 | orchestrator | 2025-06-01 22:56:59.453391 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-06-01 22:56:59.453396 | orchestrator | Sunday 01 June 2025 22:54:08 +0000 (0:00:12.868) 0:07:53.233 *********** 2025-06-01 22:56:59.453402 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.453407 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.453415 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.453421 | orchestrator | 2025-06-01 22:56:59.453426 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-01 22:56:59.453432 | orchestrator | Sunday 01 June 2025 22:54:09 +0000 (0:00:00.835) 0:07:54.069 *********** 2025-06-01 22:56:59.453437 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.453443 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.453448 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.453453 | orchestrator | 2025-06-01 22:56:59.453458 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-01 22:56:59.453464 | orchestrator | Sunday 01 June 2025 22:54:09 +0000 (0:00:00.629) 0:07:54.698 *********** 2025-06-01 22:56:59.453469 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:56:59.453475 | orchestrator | 2025-06-01 22:56:59.453480 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-01 22:56:59.453485 | orchestrator | Sunday 01 June 2025 22:54:10 +0000 (0:00:00.548) 0:07:55.247 *********** 2025-06-01 22:56:59.453491 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 22:56:59.453496 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-06-01 22:56:59.453502 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 22:56:59.453507 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.453512 | orchestrator | 2025-06-01 22:56:59.453518 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-01 22:56:59.453523 | orchestrator | Sunday 01 June 2025 22:54:10 +0000 (0:00:00.431) 0:07:55.679 *********** 2025-06-01 22:56:59.453529 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.453534 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.453539 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.453545 | orchestrator | 2025-06-01 22:56:59.453550 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-01 22:56:59.453556 | orchestrator | Sunday 01 June 2025 22:54:11 +0000 (0:00:00.298) 0:07:55.978 *********** 2025-06-01 22:56:59.453561 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.453566 | orchestrator | 2025-06-01 22:56:59.453572 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-01 22:56:59.453577 | orchestrator | Sunday 01 June 2025 22:54:11 +0000 (0:00:00.202) 0:07:56.180 *********** 2025-06-01 22:56:59.453587 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.453592 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.453597 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.453603 | orchestrator | 2025-06-01 22:56:59.453608 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-01 22:56:59.453613 | orchestrator | Sunday 01 June 2025 22:54:11 +0000 (0:00:00.600) 0:07:56.780 *********** 2025-06-01 22:56:59.453619 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.453624 | orchestrator | 2025-06-01 22:56:59.453629 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-01 22:56:59.453635 | orchestrator | Sunday 01 June 2025 22:54:12 +0000 (0:00:00.295) 0:07:57.076 *********** 2025-06-01 22:56:59.453640 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.453645 | orchestrator | 2025-06-01 22:56:59.453651 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-01 22:56:59.453656 | orchestrator | Sunday 01 June 2025 22:54:12 +0000 (0:00:00.215) 0:07:57.291 *********** 2025-06-01 22:56:59.453674 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.453680 | orchestrator | 2025-06-01 22:56:59.453685 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-01 22:56:59.453690 | orchestrator | Sunday 01 June 2025 22:54:12 +0000 (0:00:00.128) 0:07:57.420 *********** 2025-06-01 22:56:59.453696 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.453701 | orchestrator | 2025-06-01 22:56:59.453707 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-01 22:56:59.453712 | orchestrator | Sunday 01 June 2025 22:54:12 +0000 (0:00:00.215) 0:07:57.636 *********** 2025-06-01 22:56:59.453717 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.453723 | orchestrator | 2025-06-01 22:56:59.453728 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-01 22:56:59.453734 | orchestrator | Sunday 01 June 2025 22:54:12 +0000 (0:00:00.218) 0:07:57.854 *********** 2025-06-01 22:56:59.453739 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 22:56:59.453744 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-01 22:56:59.453753 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 22:56:59.453758 | orchestrator | skipping: [testbed-node-3] 2025-06-01 
22:56:59.453764 | orchestrator | 2025-06-01 22:56:59.453769 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-01 22:56:59.453775 | orchestrator | Sunday 01 June 2025 22:54:13 +0000 (0:00:00.385) 0:07:58.239 *********** 2025-06-01 22:56:59.453780 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.453785 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.453791 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.453796 | orchestrator | 2025-06-01 22:56:59.453802 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-01 22:56:59.453807 | orchestrator | Sunday 01 June 2025 22:54:13 +0000 (0:00:00.313) 0:07:58.553 *********** 2025-06-01 22:56:59.453812 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.453818 | orchestrator | 2025-06-01 22:56:59.453823 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-01 22:56:59.453828 | orchestrator | Sunday 01 June 2025 22:54:14 +0000 (0:00:00.806) 0:07:59.360 *********** 2025-06-01 22:56:59.453834 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.453839 | orchestrator | 2025-06-01 22:56:59.453845 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-06-01 22:56:59.453850 | orchestrator | 2025-06-01 22:56:59.453856 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-01 22:56:59.453861 | orchestrator | Sunday 01 June 2025 22:54:15 +0000 (0:00:00.672) 0:08:00.033 *********** 2025-06-01 22:56:59.453867 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:56:59.453878 | orchestrator | 2025-06-01 22:56:59.453883 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-06-01 22:56:59.453889 | orchestrator | Sunday 01 June 2025 22:54:16 +0000 (0:00:01.262) 0:08:01.295 *********** 2025-06-01 22:56:59.453895 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:56:59.453900 | orchestrator | 2025-06-01 22:56:59.453905 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-01 22:56:59.453911 | orchestrator | Sunday 01 June 2025 22:54:17 +0000 (0:00:01.235) 0:08:02.531 *********** 2025-06-01 22:56:59.453916 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.453921 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.453927 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.453932 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.453937 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.453943 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.453948 | orchestrator | 2025-06-01 22:56:59.453954 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-01 22:56:59.453959 | orchestrator | Sunday 01 June 2025 22:54:18 +0000 (0:00:00.995) 0:08:03.526 *********** 2025-06-01 22:56:59.453964 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.453970 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.453975 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.453980 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:56:59.453986 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:56:59.453991 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.453996 | orchestrator | 2025-06-01 22:56:59.454002 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-01 22:56:59.454007 | orchestrator | Sunday 01 
June 2025 22:54:19 +0000 (0:00:00.986) 0:08:04.513 *********** 2025-06-01 22:56:59.454013 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.454033 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.454039 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.454044 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:56:59.454049 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:56:59.454055 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.454060 | orchestrator | 2025-06-01 22:56:59.454065 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-01 22:56:59.454071 | orchestrator | Sunday 01 June 2025 22:54:20 +0000 (0:00:01.318) 0:08:05.831 *********** 2025-06-01 22:56:59.454076 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.454082 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.454087 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.454093 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:56:59.454098 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:56:59.454103 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.454109 | orchestrator | 2025-06-01 22:56:59.454114 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-01 22:56:59.454120 | orchestrator | Sunday 01 June 2025 22:54:21 +0000 (0:00:01.017) 0:08:06.849 *********** 2025-06-01 22:56:59.454125 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.454130 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.454136 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.454141 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.454147 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.454152 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.454157 | orchestrator | 2025-06-01 22:56:59.454163 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2025-06-01 22:56:59.454168 | orchestrator | Sunday 01 June 2025 22:54:22 +0000 (0:00:00.954) 0:08:07.804 *********** 2025-06-01 22:56:59.454174 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.454179 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.454185 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.454194 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.454200 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.454205 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.454210 | orchestrator | 2025-06-01 22:56:59.454216 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-01 22:56:59.454221 | orchestrator | Sunday 01 June 2025 22:54:23 +0000 (0:00:00.599) 0:08:08.404 *********** 2025-06-01 22:56:59.454244 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.454250 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.454255 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.454263 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.454269 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.454274 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.454280 | orchestrator | 2025-06-01 22:56:59.454285 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-01 22:56:59.454291 | orchestrator | Sunday 01 June 2025 22:54:24 +0000 (0:00:00.873) 0:08:09.277 *********** 2025-06-01 22:56:59.454296 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.454302 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.454307 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.454313 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:56:59.454318 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:56:59.454323 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.454329 | 
orchestrator | 2025-06-01 22:56:59.454334 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-01 22:56:59.454340 | orchestrator | Sunday 01 June 2025 22:54:25 +0000 (0:00:01.038) 0:08:10.316 *********** 2025-06-01 22:56:59.454345 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.454350 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.454356 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.454361 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:56:59.454366 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:56:59.454372 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.454377 | orchestrator | 2025-06-01 22:56:59.454382 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-01 22:56:59.454388 | orchestrator | Sunday 01 June 2025 22:54:26 +0000 (0:00:01.345) 0:08:11.661 *********** 2025-06-01 22:56:59.454393 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.454399 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.454408 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.454413 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.454419 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.454424 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.454429 | orchestrator | 2025-06-01 22:56:59.454435 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-01 22:56:59.454440 | orchestrator | Sunday 01 June 2025 22:54:27 +0000 (0:00:00.611) 0:08:12.272 *********** 2025-06-01 22:56:59.454446 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.454451 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.454457 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.454462 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.454468 | orchestrator | skipping: [testbed-node-4] 2025-06-01 
22:56:59.454473 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.454478 | orchestrator | 2025-06-01 22:56:59.454484 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-01 22:56:59.454489 | orchestrator | Sunday 01 June 2025 22:54:28 +0000 (0:00:00.831) 0:08:13.104 *********** 2025-06-01 22:56:59.454495 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.454500 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.454505 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.454511 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:56:59.454516 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:56:59.454521 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.454527 | orchestrator | 2025-06-01 22:56:59.454532 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-01 22:56:59.454541 | orchestrator | Sunday 01 June 2025 22:54:28 +0000 (0:00:00.641) 0:08:13.745 *********** 2025-06-01 22:56:59.454547 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.454552 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.454558 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.454563 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:56:59.454568 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:56:59.454574 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.454579 | orchestrator | 2025-06-01 22:56:59.454584 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-01 22:56:59.454590 | orchestrator | Sunday 01 June 2025 22:54:29 +0000 (0:00:00.928) 0:08:14.674 *********** 2025-06-01 22:56:59.454595 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.454600 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.454606 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.454611 | orchestrator | ok: 
[testbed-node-3] 2025-06-01 22:56:59.454616 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:56:59.454621 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.454627 | orchestrator | 2025-06-01 22:56:59.454632 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-01 22:56:59.454637 | orchestrator | Sunday 01 June 2025 22:54:30 +0000 (0:00:00.630) 0:08:15.304 *********** 2025-06-01 22:56:59.454643 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.454648 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.454653 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.454659 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.454675 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.454680 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.454686 | orchestrator | 2025-06-01 22:56:59.454691 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-01 22:56:59.454697 | orchestrator | Sunday 01 June 2025 22:54:31 +0000 (0:00:00.759) 0:08:16.064 *********** 2025-06-01 22:56:59.454702 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:56:59.454707 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:56:59.454712 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:56:59.454718 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.454723 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.454728 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.454734 | orchestrator | 2025-06-01 22:56:59.454739 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-01 22:56:59.454744 | orchestrator | Sunday 01 June 2025 22:54:31 +0000 (0:00:00.576) 0:08:16.641 *********** 2025-06-01 22:56:59.454750 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.454755 | orchestrator | ok: [testbed-node-1] 2025-06-01 
22:56:59.454760 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.454766 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.454771 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.454776 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.454782 | orchestrator | 2025-06-01 22:56:59.454787 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-01 22:56:59.454793 | orchestrator | Sunday 01 June 2025 22:54:32 +0000 (0:00:00.837) 0:08:17.478 *********** 2025-06-01 22:56:59.454798 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.454803 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.454812 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.454817 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:56:59.454822 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:56:59.454828 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.454833 | orchestrator | 2025-06-01 22:56:59.454839 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-01 22:56:59.454844 | orchestrator | Sunday 01 June 2025 22:54:33 +0000 (0:00:00.635) 0:08:18.114 *********** 2025-06-01 22:56:59.454850 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.454855 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.454864 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.454870 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:56:59.454875 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:56:59.454880 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.454886 | orchestrator | 2025-06-01 22:56:59.454891 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-06-01 22:56:59.454896 | orchestrator | Sunday 01 June 2025 22:54:34 +0000 (0:00:01.200) 0:08:19.315 *********** 2025-06-01 22:56:59.454902 | orchestrator | changed: [testbed-node-0] 2025-06-01 
22:56:59.454907 | orchestrator | 2025-06-01 22:56:59.454912 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-06-01 22:56:59.454918 | orchestrator | Sunday 01 June 2025 22:54:38 +0000 (0:00:04.006) 0:08:23.321 *********** 2025-06-01 22:56:59.454923 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.454928 | orchestrator | 2025-06-01 22:56:59.454934 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-06-01 22:56:59.454939 | orchestrator | Sunday 01 June 2025 22:54:40 +0000 (0:00:01.902) 0:08:25.224 *********** 2025-06-01 22:56:59.454944 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.454953 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.454959 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.454964 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:56:59.454969 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:56:59.454975 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:56:59.454980 | orchestrator | 2025-06-01 22:56:59.454985 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-06-01 22:56:59.454991 | orchestrator | Sunday 01 June 2025 22:54:42 +0000 (0:00:01.793) 0:08:27.018 *********** 2025-06-01 22:56:59.454996 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.455001 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.455007 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.455012 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:56:59.455017 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:56:59.455022 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:56:59.455028 | orchestrator | 2025-06-01 22:56:59.455033 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-06-01 22:56:59.455038 | orchestrator | Sunday 01 June 2025 22:54:43 +0000 
(0:00:01.062) 0:08:28.080 *********** 2025-06-01 22:56:59.455044 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:56:59.455050 | orchestrator | 2025-06-01 22:56:59.455055 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-06-01 22:56:59.455061 | orchestrator | Sunday 01 June 2025 22:54:44 +0000 (0:00:01.336) 0:08:29.417 *********** 2025-06-01 22:56:59.455066 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.455071 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.455077 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.455082 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:56:59.455087 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:56:59.455093 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:56:59.455098 | orchestrator | 2025-06-01 22:56:59.455103 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-06-01 22:56:59.455109 | orchestrator | Sunday 01 June 2025 22:54:46 +0000 (0:00:02.129) 0:08:31.547 *********** 2025-06-01 22:56:59.455114 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.455119 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.455125 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:56:59.455130 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.455135 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:56:59.455140 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:56:59.455146 | orchestrator | 2025-06-01 22:56:59.455151 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-06-01 22:56:59.455156 | orchestrator | Sunday 01 June 2025 22:54:50 +0000 (0:00:03.429) 0:08:34.976 *********** 2025-06-01 22:56:59.455166 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:56:59.455171 | orchestrator | 2025-06-01 22:56:59.455177 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-06-01 22:56:59.455182 | orchestrator | Sunday 01 June 2025 22:54:51 +0000 (0:00:01.323) 0:08:36.299 *********** 2025-06-01 22:56:59.455187 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.455193 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.455198 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:56:59.455203 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:56:59.455209 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:56:59.455214 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.455219 | orchestrator | 2025-06-01 22:56:59.455225 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-06-01 22:56:59.455230 | orchestrator | Sunday 01 June 2025 22:54:52 +0000 (0:00:00.829) 0:08:37.129 *********** 2025-06-01 22:56:59.455235 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:56:59.455241 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:56:59.455246 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:56:59.455251 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:56:59.455256 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:56:59.455262 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:56:59.455267 | orchestrator | 2025-06-01 22:56:59.455272 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-06-01 22:56:59.455278 | orchestrator | Sunday 01 June 2025 22:54:54 +0000 (0:00:02.189) 0:08:39.318 *********** 2025-06-01 22:56:59.455283 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:56:59.455288 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:56:59.455293 | orchestrator | ok: 
[testbed-node-2] 2025-06-01 22:56:59.455299 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:56:59.455307 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:56:59.455312 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.455318 | orchestrator | 2025-06-01 22:56:59.455323 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-06-01 22:56:59.455328 | orchestrator | 2025-06-01 22:56:59.455334 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-01 22:56:59.455339 | orchestrator | Sunday 01 June 2025 22:54:55 +0000 (0:00:01.304) 0:08:40.623 *********** 2025-06-01 22:56:59.455345 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:56:59.455350 | orchestrator | 2025-06-01 22:56:59.455356 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-01 22:56:59.455361 | orchestrator | Sunday 01 June 2025 22:54:56 +0000 (0:00:00.543) 0:08:41.166 *********** 2025-06-01 22:56:59.455366 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:56:59.455372 | orchestrator | 2025-06-01 22:56:59.455377 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-01 22:56:59.455383 | orchestrator | Sunday 01 June 2025 22:54:57 +0000 (0:00:00.985) 0:08:42.152 *********** 2025-06-01 22:56:59.455388 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.455393 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.455399 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.455404 | orchestrator | 2025-06-01 22:56:59.455413 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-01 22:56:59.455419 | orchestrator | 
Sunday 01 June 2025 22:54:57 +0000 (0:00:00.350) 0:08:42.503 ***********
2025-06-01 22:56:59.455425 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.455430 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.455435 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.455441 | orchestrator |
2025-06-01 22:56:59.455446 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-01 22:56:59.455456 | orchestrator | Sunday 01 June 2025 22:54:58 +0000 (0:00:00.702) 0:08:43.205 ***********
2025-06-01 22:56:59.455462 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.455467 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.455472 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.455477 | orchestrator |
2025-06-01 22:56:59.455483 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-01 22:56:59.455488 | orchestrator | Sunday 01 June 2025 22:54:59 +0000 (0:00:01.248) 0:08:44.453 ***********
2025-06-01 22:56:59.455493 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.455499 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.455504 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.455509 | orchestrator |
2025-06-01 22:56:59.455515 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-01 22:56:59.455520 | orchestrator | Sunday 01 June 2025 22:55:00 +0000 (0:00:00.741) 0:08:45.195 ***********
2025-06-01 22:56:59.455526 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.455531 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.455536 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.455542 | orchestrator |
2025-06-01 22:56:59.455547 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-01 22:56:59.455552 | orchestrator | Sunday 01 June 2025 22:55:00 +0000 (0:00:00.330) 0:08:45.525 ***********
2025-06-01 22:56:59.455557 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.455563 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.455568 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.455574 | orchestrator |
2025-06-01 22:56:59.455579 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-01 22:56:59.455584 | orchestrator | Sunday 01 June 2025 22:55:01 +0000 (0:00:00.363) 0:08:45.889 ***********
2025-06-01 22:56:59.455590 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.455595 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.455600 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.455605 | orchestrator |
2025-06-01 22:56:59.455611 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-01 22:56:59.455616 | orchestrator | Sunday 01 June 2025 22:55:01 +0000 (0:00:00.846) 0:08:46.735 ***********
2025-06-01 22:56:59.455621 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.455627 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.455632 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.455637 | orchestrator |
2025-06-01 22:56:59.455643 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-01 22:56:59.455648 | orchestrator | Sunday 01 June 2025 22:55:02 +0000 (0:00:00.778) 0:08:47.514 ***********
2025-06-01 22:56:59.455653 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.455659 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.455676 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.455681 | orchestrator |
2025-06-01 22:56:59.455686 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-01 22:56:59.455692 | orchestrator | Sunday 01 June 2025 22:55:03 +0000 (0:00:00.767) 0:08:48.281 ***********
2025-06-01 22:56:59.455697 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.455703 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.455708 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.455714 | orchestrator |
2025-06-01 22:56:59.455719 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-01 22:56:59.455724 | orchestrator | Sunday 01 June 2025 22:55:03 +0000 (0:00:00.297) 0:08:48.579 ***********
2025-06-01 22:56:59.455730 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.455735 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.455740 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.455746 | orchestrator |
2025-06-01 22:56:59.455751 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-01 22:56:59.455756 | orchestrator | Sunday 01 June 2025 22:55:04 +0000 (0:00:00.599) 0:08:49.179 ***********
2025-06-01 22:56:59.455766 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.455771 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.455777 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.455782 | orchestrator |
2025-06-01 22:56:59.455787 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-01 22:56:59.455796 | orchestrator | Sunday 01 June 2025 22:55:04 +0000 (0:00:00.338) 0:08:49.517 ***********
2025-06-01 22:56:59.455801 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.455806 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.455812 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.455817 | orchestrator |
2025-06-01 22:56:59.455823 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-01 22:56:59.455828 | orchestrator | Sunday 01 June 2025 22:55:04 +0000 (0:00:00.342) 0:08:49.860 ***********
2025-06-01 22:56:59.455833 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.455839 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.455844 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.455849 | orchestrator |
2025-06-01 22:56:59.455855 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-01 22:56:59.455861 | orchestrator | Sunday 01 June 2025 22:55:05 +0000 (0:00:00.318) 0:08:50.178 ***********
2025-06-01 22:56:59.455866 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.455871 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.455877 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.455882 | orchestrator |
2025-06-01 22:56:59.455888 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-01 22:56:59.455893 | orchestrator | Sunday 01 June 2025 22:55:05 +0000 (0:00:00.573) 0:08:50.752 ***********
2025-06-01 22:56:59.455898 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.455904 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.455909 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.455914 | orchestrator |
2025-06-01 22:56:59.455920 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-01 22:56:59.455929 | orchestrator | Sunday 01 June 2025 22:55:06 +0000 (0:00:00.323) 0:08:51.075 ***********
2025-06-01 22:56:59.455934 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.455939 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.455945 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.455950 | orchestrator |
2025-06-01 22:56:59.455956 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-01 22:56:59.455961 | orchestrator | Sunday 01 June 2025 22:55:06 +0000 (0:00:00.306) 0:08:51.382 ***********
2025-06-01 22:56:59.455966 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.455972 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.455977 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.455982 | orchestrator |
2025-06-01 22:56:59.455988 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-01 22:56:59.455993 | orchestrator | Sunday 01 June 2025 22:55:06 +0000 (0:00:00.341) 0:08:51.723 ***********
2025-06-01 22:56:59.455999 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.456004 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.456009 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.456015 | orchestrator |
2025-06-01 22:56:59.456020 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2025-06-01 22:56:59.456025 | orchestrator | Sunday 01 June 2025 22:55:07 +0000 (0:00:00.799) 0:08:52.523 ***********
2025-06-01 22:56:59.456031 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.456036 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.456042 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-06-01 22:56:59.456047 | orchestrator |
2025-06-01 22:56:59.456053 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2025-06-01 22:56:59.456058 | orchestrator | Sunday 01 June 2025 22:55:08 +0000 (0:00:00.391) 0:08:52.914 ***********
2025-06-01 22:56:59.456064 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-01 22:56:59.456075 | orchestrator |
2025-06-01 22:56:59.456081 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2025-06-01 22:56:59.456086 | orchestrator | Sunday 01 June 2025 22:55:10 +0000 (0:00:01.975) 0:08:54.890 ***********
2025-06-01 22:56:59.456093 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-06-01 22:56:59.456101 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.456106 | orchestrator |
2025-06-01 22:56:59.456111 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2025-06-01 22:56:59.456117 | orchestrator | Sunday 01 June 2025 22:55:10 +0000 (0:00:00.223) 0:08:55.113 ***********
2025-06-01 22:56:59.456124 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-01 22:56:59.456135 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-01 22:56:59.456140 | orchestrator |
2025-06-01 22:56:59.456146 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2025-06-01 22:56:59.456151 | orchestrator | Sunday 01 June 2025 22:55:18 +0000 (0:00:07.765) 0:09:02.879 ***********
2025-06-01 22:56:59.456156 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-01 22:56:59.456162 | orchestrator |
2025-06-01 22:56:59.456167 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2025-06-01 22:56:59.456172 | orchestrator | Sunday 01 June 2025 22:55:21 +0000 (0:00:03.505) 0:09:06.384 ***********
2025-06-01 22:56:59.456178 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:56:59.456183 | orchestrator |
2025-06-01 22:56:59.456188 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2025-06-01 22:56:59.456196 | orchestrator | Sunday 01 June 2025 22:55:22 +0000 (0:00:00.615) 0:09:07.000 ***********
2025-06-01 22:56:59.456202 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-06-01 22:56:59.456207 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-06-01 22:56:59.456213 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-06-01 22:56:59.456218 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-06-01 22:56:59.456223 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-06-01 22:56:59.456229 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-06-01 22:56:59.456234 | orchestrator |
2025-06-01 22:56:59.456240 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2025-06-01 22:56:59.456245 | orchestrator | Sunday 01 June 2025 22:55:23 +0000 (0:00:01.105) 0:09:08.105 ***********
2025-06-01 22:56:59.456250 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:56:59.456256 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-01 22:56:59.456261 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-01 22:56:59.456267 | orchestrator |
2025-06-01 22:56:59.456272 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2025-06-01 22:56:59.456277 | orchestrator | Sunday 01 June 2025 22:55:25 +0000 (0:00:02.422) 0:09:10.528 ***********
2025-06-01 22:56:59.456286 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-01 22:56:59.456292 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-01 22:56:59.456302 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:56:59.456307 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-01 22:56:59.456312 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-06-01 22:56:59.456318 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:56:59.456323 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-01 22:56:59.456328 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-06-01 22:56:59.456334 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:56:59.456339 | orchestrator |
2025-06-01 22:56:59.456344 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2025-06-01 22:56:59.456350 | orchestrator | Sunday 01 June 2025 22:55:27 +0000 (0:00:01.779) 0:09:12.307 ***********
2025-06-01 22:56:59.456355 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:56:59.456360 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:56:59.456366 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:56:59.456371 | orchestrator |
2025-06-01 22:56:59.456376 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2025-06-01 22:56:59.456382 | orchestrator | Sunday 01 June 2025 22:55:30 +0000 (0:00:02.768) 0:09:15.075 ***********
2025-06-01 22:56:59.456387 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.456393 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.456398 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.456403 | orchestrator |
2025-06-01 22:56:59.456408 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2025-06-01 22:56:59.456414 | orchestrator | Sunday 01 June 2025 22:55:30 +0000 (0:00:00.375) 0:09:15.451 ***********
2025-06-01 22:56:59.456419 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:56:59.456424 | orchestrator |
2025-06-01 22:56:59.456430 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2025-06-01 22:56:59.456435 | orchestrator | Sunday 01 June 2025 22:55:31 +0000 (0:00:00.806) 0:09:16.258 ***********
2025-06-01 22:56:59.456440 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:56:59.456446 | orchestrator |
2025-06-01 22:56:59.456451 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2025-06-01 22:56:59.456456 | orchestrator | Sunday 01 June 2025 22:55:32 +0000 (0:00:00.612) 0:09:16.870 ***********
2025-06-01 22:56:59.456462 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:56:59.456467 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:56:59.456472 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:56:59.456478 | orchestrator |
2025-06-01 22:56:59.456483 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2025-06-01 22:56:59.456488 | orchestrator | Sunday 01 June 2025 22:55:33 +0000 (0:00:01.319) 0:09:18.190 ***********
2025-06-01 22:56:59.456493 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:56:59.456499 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:56:59.456504 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:56:59.456509 | orchestrator |
2025-06-01 22:56:59.456515 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2025-06-01 22:56:59.456520 | orchestrator | Sunday 01 June 2025 22:55:34 +0000 (0:00:01.523) 0:09:19.714 ***********
2025-06-01 22:56:59.456525 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:56:59.456531 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:56:59.456536 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:56:59.456541 | orchestrator |
2025-06-01 22:56:59.456547 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2025-06-01 22:56:59.456552 | orchestrator | Sunday 01 June 2025 22:55:36 +0000 (0:00:02.030) 0:09:21.744 ***********
2025-06-01 22:56:59.456557 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:56:59.456563 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:56:59.456568 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:56:59.456573 | orchestrator |
2025-06-01 22:56:59.456579 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2025-06-01 22:56:59.456588 | orchestrator | Sunday 01 June 2025 22:55:38 +0000 (0:00:02.088) 0:09:23.832 ***********
2025-06-01 22:56:59.456593 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.456599 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.456604 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.456609 | orchestrator |
2025-06-01 22:56:59.456614 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-01 22:56:59.456620 | orchestrator | Sunday 01 June 2025 22:55:40 +0000 (0:00:01.672) 0:09:25.505 ***********
2025-06-01 22:56:59.456628 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:56:59.456633 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:56:59.456639 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:56:59.456644 | orchestrator |
2025-06-01 22:56:59.456649 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-06-01 22:56:59.456655 | orchestrator | Sunday 01 June 2025 22:55:41 +0000 (0:00:00.753) 0:09:26.259 ***********
2025-06-01 22:56:59.456691 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:56:59.456698 | orchestrator |
2025-06-01 22:56:59.456703 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-06-01 22:56:59.456709 | orchestrator | Sunday 01 June 2025 22:55:42 +0000 (0:00:00.894) 0:09:27.153 ***********
2025-06-01 22:56:59.456714 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.456720 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.456725 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.456730 | orchestrator |
2025-06-01 22:56:59.456736 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-06-01 22:56:59.456741 | orchestrator | Sunday 01 June 2025 22:55:42 +0000 (0:00:00.388) 0:09:27.542 ***********
2025-06-01 22:56:59.456746 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:56:59.456752 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:56:59.456757 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:56:59.456762 | orchestrator |
2025-06-01 22:56:59.456767 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-06-01 22:56:59.456776 | orchestrator | Sunday 01 June 2025 22:55:44 +0000 (0:00:01.381) 0:09:28.924 ***********
2025-06-01 22:56:59.456782 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 22:56:59.456787 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 22:56:59.456793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 22:56:59.456798 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.456803 | orchestrator |
2025-06-01 22:56:59.456809 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-06-01 22:56:59.456814 | orchestrator | Sunday 01 June 2025 22:55:45 +0000 (0:00:01.001) 0:09:29.926 ***********
2025-06-01 22:56:59.456819 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.456825 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.456830 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.456835 | orchestrator |
2025-06-01 22:56:59.456841 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-06-01 22:56:59.456846 | orchestrator |
2025-06-01 22:56:59.456851 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-01 22:56:59.456857 | orchestrator | Sunday 01 June 2025 22:55:45 +0000 (0:00:00.849) 0:09:30.775 ***********
2025-06-01 22:56:59.456862 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:56:59.456867 | orchestrator |
2025-06-01 22:56:59.456873 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-01 22:56:59.456878 | orchestrator | Sunday 01 June 2025 22:55:46 +0000 (0:00:00.574) 0:09:31.349 ***********
2025-06-01 22:56:59.456883 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:56:59.456889 | orchestrator |
2025-06-01 22:56:59.456899 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-01 22:56:59.456905 | orchestrator | Sunday 01 June 2025 22:55:47 +0000 (0:00:00.895) 0:09:32.244 ***********
2025-06-01 22:56:59.456910 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.456915 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.456921 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.456926 | orchestrator |
2025-06-01 22:56:59.456932 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-01 22:56:59.456937 | orchestrator | Sunday 01 June 2025 22:55:47 +0000 (0:00:00.451) 0:09:32.696 ***********
2025-06-01 22:56:59.456942 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.456948 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.456953 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.456958 | orchestrator |
2025-06-01 22:56:59.456964 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-01 22:56:59.456969 | orchestrator | Sunday 01 June 2025 22:55:48 +0000 (0:00:00.757) 0:09:33.454 ***********
2025-06-01 22:56:59.456974 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.456980 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.456985 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.456990 | orchestrator |
2025-06-01 22:56:59.456996 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-01 22:56:59.457001 | orchestrator | Sunday 01 June 2025 22:55:49 +0000 (0:00:00.845) 0:09:34.299 ***********
2025-06-01 22:56:59.457006 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.457012 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.457017 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.457022 | orchestrator |
2025-06-01 22:56:59.457027 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-01 22:56:59.457033 | orchestrator | Sunday 01 June 2025 22:55:50 +0000 (0:00:01.080) 0:09:35.380 ***********
2025-06-01 22:56:59.457038 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.457044 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.457049 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.457054 | orchestrator |
2025-06-01 22:56:59.457060 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-01 22:56:59.457065 | orchestrator | Sunday 01 June 2025 22:55:50 +0000 (0:00:00.378) 0:09:35.758 ***********
2025-06-01 22:56:59.457070 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.457076 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.457081 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.457086 | orchestrator |
2025-06-01 22:56:59.457092 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-01 22:56:59.457097 | orchestrator | Sunday 01 June 2025 22:55:51 +0000 (0:00:00.384) 0:09:36.142 ***********
2025-06-01 22:56:59.457103 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.457108 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.457116 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.457122 | orchestrator |
2025-06-01 22:56:59.457127 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-01 22:56:59.457132 | orchestrator | Sunday 01 June 2025 22:55:51 +0000 (0:00:00.340) 0:09:36.483 ***********
2025-06-01 22:56:59.457138 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.457143 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.457148 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.457154 | orchestrator |
2025-06-01 22:56:59.457159 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-01 22:56:59.457164 | orchestrator | Sunday 01 June 2025 22:55:52 +0000 (0:00:01.098) 0:09:37.581 ***********
2025-06-01 22:56:59.457169 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.457174 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.457179 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.457183 | orchestrator |
2025-06-01 22:56:59.457188 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-01 22:56:59.457197 | orchestrator | Sunday 01 June 2025 22:55:53 +0000 (0:00:00.751) 0:09:38.332 ***********
2025-06-01 22:56:59.457201 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.457206 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.457211 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.457216 | orchestrator |
2025-06-01 22:56:59.457220 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-01 22:56:59.457225 | orchestrator | Sunday 01 June 2025 22:55:53 +0000 (0:00:00.330) 0:09:38.662 ***********
2025-06-01 22:56:59.457230 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.457238 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.457243 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.457248 | orchestrator |
2025-06-01 22:56:59.457253 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-01 22:56:59.457257 | orchestrator | Sunday 01 June 2025 22:55:54 +0000 (0:00:00.330) 0:09:38.993 ***********
2025-06-01 22:56:59.457262 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.457267 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.457272 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.457276 | orchestrator |
2025-06-01 22:56:59.457281 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-01 22:56:59.457286 | orchestrator | Sunday 01 June 2025 22:55:54 +0000 (0:00:00.734) 0:09:39.728 ***********
2025-06-01 22:56:59.457291 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.457295 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.457300 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.457305 | orchestrator |
2025-06-01 22:56:59.457309 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-01 22:56:59.457314 | orchestrator | Sunday 01 June 2025 22:55:55 +0000 (0:00:00.361) 0:09:40.089 ***********
2025-06-01 22:56:59.457319 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.457324 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.457328 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.457333 | orchestrator |
2025-06-01 22:56:59.457338 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-01 22:56:59.457343 | orchestrator | Sunday 01 June 2025 22:55:55 +0000 (0:00:00.427) 0:09:40.517 ***********
2025-06-01 22:56:59.457347 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.457352 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.457357 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.457362 | orchestrator |
2025-06-01 22:56:59.457366 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-01 22:56:59.457371 | orchestrator | Sunday 01 June 2025 22:55:55 +0000 (0:00:00.328) 0:09:40.845 ***********
2025-06-01 22:56:59.457376 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.457380 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.457385 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.457390 | orchestrator |
2025-06-01 22:56:59.457395 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-01 22:56:59.457399 | orchestrator | Sunday 01 June 2025 22:55:56 +0000 (0:00:00.687) 0:09:41.533 ***********
2025-06-01 22:56:59.457404 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.457409 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.457414 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.457418 | orchestrator |
2025-06-01 22:56:59.457423 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-01 22:56:59.457428 | orchestrator | Sunday 01 June 2025 22:55:56 +0000 (0:00:00.309) 0:09:41.842 ***********
2025-06-01 22:56:59.457432 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.457437 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.457442 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.457447 | orchestrator |
2025-06-01 22:56:59.457451 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-01 22:56:59.457456 | orchestrator | Sunday 01 June 2025 22:55:57 +0000 (0:00:00.321) 0:09:42.164 ***********
2025-06-01 22:56:59.457465 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:56:59.457470 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:56:59.457474 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:56:59.457479 | orchestrator |
2025-06-01 22:56:59.457484 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2025-06-01 22:56:59.457489 | orchestrator | Sunday 01 June 2025 22:55:58 +0000 (0:00:00.796) 0:09:42.960 ***********
2025-06-01 22:56:59.457493 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:56:59.457498 | orchestrator |
2025-06-01 22:56:59.457503 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-06-01 22:56:59.457508 | orchestrator | Sunday 01 June 2025 22:55:58 +0000 (0:00:00.524) 0:09:43.485 ***********
2025-06-01 22:56:59.457512 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:56:59.457517 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-01 22:56:59.457522 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-01 22:56:59.457527 | orchestrator |
2025-06-01 22:56:59.457531 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-06-01 22:56:59.457536 | orchestrator | Sunday 01 June 2025 22:56:00 +0000 (0:00:02.140) 0:09:45.626 ***********
2025-06-01 22:56:59.457541 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-01 22:56:59.457548 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-01 22:56:59.457553 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-06-01 22:56:59.457558 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-01 22:56:59.457563 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:56:59.457567 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:56:59.457572 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-01 22:56:59.457577 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-06-01 22:56:59.457582 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:56:59.457586 | orchestrator |
2025-06-01 22:56:59.457591 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2025-06-01 22:56:59.457596 | orchestrator | Sunday 01 June 2025 22:56:02 +0000 (0:00:01.439) 0:09:47.065 ***********
2025-06-01 22:56:59.457601 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:56:59.457606 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:56:59.457610 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:56:59.457615 | orchestrator |
2025-06-01 22:56:59.457620 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2025-06-01 22:56:59.457625 | orchestrator | Sunday 01 June 2025 22:56:02 +0000 (0:00:00.309) 0:09:47.375 ***********
2025-06-01 22:56:59.457629 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:56:59.457634 | orchestrator |
2025-06-01 22:56:59.457639 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2025-06-01 22:56:59.457647 | orchestrator | Sunday 01 June 2025 22:56:03 +0000 (0:00:00.548) 0:09:47.923 ***********
2025-06-01 22:56:59.457652 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-01 22:56:59.457657 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-01 22:56:59.457676 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-01 22:56:59.457681 | orchestrator |
2025-06-01 22:56:59.457686 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2025-06-01 22:56:59.457691 | orchestrator | Sunday 01 June 2025 22:56:04 +0000 (0:00:01.338) 0:09:49.261 ***********
2025-06-01 22:56:59.457696 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:56:59.457705 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-06-01 22:56:59.457710 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:56:59.457714 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-06-01 22:56:59.457719 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:56:59.457724 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-06-01 22:56:59.457729 | orchestrator |
2025-06-01 22:56:59.457734 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-06-01 22:56:59.457738 | orchestrator | Sunday 01 June 2025 22:56:08 +0000 (0:00:04.294) 0:09:53.556 ***********
2025-06-01 22:56:59.457743 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:56:59.457748 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-01 22:56:59.457753 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:56:59.457757 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-01 22:56:59.457762 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:56:59.457767 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-01 22:56:59.457772 | orchestrator |
2025-06-01 22:56:59.457776 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-06-01 22:56:59.457781 | orchestrator | Sunday 01 June 2025 22:56:11 +0000 (0:00:02.368) 0:09:55.924 ***********
2025-06-01 22:56:59.457786 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-01 22:56:59.457790 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:56:59.457795 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-01 22:56:59.457800 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:56:59.457805 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-01 22:56:59.457809 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:56:59.457814 | orchestrator |
2025-06-01 22:56:59.457819 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2025-06-01 22:56:59.457824 | orchestrator | Sunday 01 June 2025 22:56:12 +0000 (0:00:01.274) 0:09:57.199 ***********
2025-06-01 22:56:59.457828 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2025-06-01 22:56:59.457833 | orchestrator |
2025-06-01 22:56:59.457838 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2025-06-01 22:56:59.457842 | orchestrator | Sunday 01 June 2025 22:56:12 +0000 (0:00:00.213) 0:09:57.413 ***********
2025-06-01 22:56:59.457847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-01 22:56:59.457855 | orchestrator |
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-01 22:56:59.457860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-01 22:56:59.457865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-01 22:56:59.457870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-01 22:56:59.457875 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.457880 | orchestrator | 2025-06-01 22:56:59.457884 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-06-01 22:56:59.457889 | orchestrator | Sunday 01 June 2025 22:56:13 +0000 (0:00:01.135) 0:09:58.548 *********** 2025-06-01 22:56:59.457899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-01 22:56:59.457904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-01 22:56:59.457912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-01 22:56:59.457917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-01 22:56:59.457921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-01 22:56:59.457926 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.457931 | orchestrator | 2025-06-01 22:56:59.457936 | 
orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-06-01 22:56:59.457940 | orchestrator | Sunday 01 June 2025 22:56:14 +0000 (0:00:00.587) 0:09:59.136 *********** 2025-06-01 22:56:59.457945 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-01 22:56:59.457950 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-01 22:56:59.457955 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-01 22:56:59.457960 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-01 22:56:59.457965 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-01 22:56:59.457970 | orchestrator | 2025-06-01 22:56:59.457974 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-06-01 22:56:59.457979 | orchestrator | Sunday 01 June 2025 22:56:44 +0000 (0:00:30.647) 0:10:29.783 *********** 2025-06-01 22:56:59.457984 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.457989 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.457994 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.457998 | orchestrator | 2025-06-01 22:56:59.458003 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-06-01 22:56:59.458008 | orchestrator | Sunday 01 June 2025 22:56:45 +0000 (0:00:00.315) 0:10:30.098 
*********** 2025-06-01 22:56:59.458035 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.458041 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.458046 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.458051 | orchestrator | 2025-06-01 22:56:59.458056 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-06-01 22:56:59.458061 | orchestrator | Sunday 01 June 2025 22:56:45 +0000 (0:00:00.319) 0:10:30.417 *********** 2025-06-01 22:56:59.458065 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:56:59.458070 | orchestrator | 2025-06-01 22:56:59.458075 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-06-01 22:56:59.458080 | orchestrator | Sunday 01 June 2025 22:56:46 +0000 (0:00:00.855) 0:10:31.273 *********** 2025-06-01 22:56:59.458085 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:56:59.458090 | orchestrator | 2025-06-01 22:56:59.458094 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-06-01 22:56:59.458099 | orchestrator | Sunday 01 June 2025 22:56:46 +0000 (0:00:00.537) 0:10:31.810 *********** 2025-06-01 22:56:59.458104 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:56:59.458113 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:56:59.458118 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:56:59.458123 | orchestrator | 2025-06-01 22:56:59.458128 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-06-01 22:56:59.458132 | orchestrator | Sunday 01 June 2025 22:56:48 +0000 (0:00:01.383) 0:10:33.194 *********** 2025-06-01 22:56:59.458137 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:56:59.458142 | orchestrator | 
changed: [testbed-node-4] 2025-06-01 22:56:59.458147 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:56:59.458152 | orchestrator | 2025-06-01 22:56:59.458159 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-06-01 22:56:59.458164 | orchestrator | Sunday 01 June 2025 22:56:49 +0000 (0:00:01.510) 0:10:34.704 *********** 2025-06-01 22:56:59.458169 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:56:59.458174 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:56:59.458179 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:56:59.458183 | orchestrator | 2025-06-01 22:56:59.458188 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-06-01 22:56:59.458193 | orchestrator | Sunday 01 June 2025 22:56:51 +0000 (0:00:01.902) 0:10:36.607 *********** 2025-06-01 22:56:59.458198 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-01 22:56:59.458203 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-01 22:56:59.458207 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-01 22:56:59.458212 | orchestrator | 2025-06-01 22:56:59.458217 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-01 22:56:59.458222 | orchestrator | Sunday 01 June 2025 22:56:54 +0000 (0:00:03.155) 0:10:39.763 *********** 2025-06-01 22:56:59.458226 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.458235 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.458239 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.458244 | orchestrator | 2025-06-01 22:56:59.458249 | orchestrator | RUNNING HANDLER 
[ceph-handler : Rgws handler] ********************************** 2025-06-01 22:56:59.458254 | orchestrator | Sunday 01 June 2025 22:56:55 +0000 (0:00:00.380) 0:10:40.143 *********** 2025-06-01 22:56:59.458259 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:56:59.458263 | orchestrator | 2025-06-01 22:56:59.458268 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-01 22:56:59.458273 | orchestrator | Sunday 01 June 2025 22:56:55 +0000 (0:00:00.560) 0:10:40.703 *********** 2025-06-01 22:56:59.458278 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:56:59.458282 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:56:59.458287 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.458292 | orchestrator | 2025-06-01 22:56:59.458297 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-01 22:56:59.458301 | orchestrator | Sunday 01 June 2025 22:56:56 +0000 (0:00:00.591) 0:10:41.295 *********** 2025-06-01 22:56:59.458306 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.458311 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:56:59.458316 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:56:59.458320 | orchestrator | 2025-06-01 22:56:59.458325 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-01 22:56:59.458330 | orchestrator | Sunday 01 June 2025 22:56:56 +0000 (0:00:00.367) 0:10:41.663 *********** 2025-06-01 22:56:59.458335 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 22:56:59.458340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-01 22:56:59.458344 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 22:56:59.458355 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:56:59.458359 | 
orchestrator | 2025-06-01 22:56:59.458364 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-01 22:56:59.458369 | orchestrator | Sunday 01 June 2025 22:56:57 +0000 (0:00:00.611) 0:10:42.275 *********** 2025-06-01 22:56:59.458374 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:56:59.458378 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:56:59.458383 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:56:59.458388 | orchestrator | 2025-06-01 22:56:59.458393 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:56:59.458398 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-06-01 22:56:59.458403 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-06-01 22:56:59.458407 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-06-01 22:56:59.458412 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-06-01 22:56:59.458417 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-06-01 22:56:59.458422 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-06-01 22:56:59.458426 | orchestrator | 2025-06-01 22:56:59.458431 | orchestrator | 2025-06-01 22:56:59.458436 | orchestrator | 2025-06-01 22:56:59.458441 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:56:59.458445 | orchestrator | Sunday 01 June 2025 22:56:57 +0000 (0:00:00.243) 0:10:42.518 *********** 2025-06-01 22:56:59.458450 | orchestrator | =============================================================================== 2025-06-01 22:56:59.458455 | orchestrator | 
ceph-container-common : Pulling Ceph container image ------------------- 58.93s 2025-06-01 22:56:59.458460 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.87s 2025-06-01 22:56:59.458467 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.65s 2025-06-01 22:56:59.458472 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.14s 2025-06-01 22:56:59.458477 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 13.69s 2025-06-01 22:56:59.458481 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.87s 2025-06-01 22:56:59.458486 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.86s 2025-06-01 22:56:59.458491 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.50s 2025-06-01 22:56:59.458495 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.77s 2025-06-01 22:56:59.458500 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.36s 2025-06-01 22:56:59.458505 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.27s 2025-06-01 22:56:59.458510 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.76s 2025-06-01 22:56:59.458514 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.30s 2025-06-01 22:56:59.458519 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.01s 2025-06-01 22:56:59.458524 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.94s 2025-06-01 22:56:59.458532 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.79s 2025-06-01 22:56:59.458537 | orchestrator | ceph-mon : Copy 
admin keyring over to mons ------------------------------ 3.58s
2025-06-01 22:56:59.458542 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.51s
2025-06-01 22:56:59.458550 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.43s
2025-06-01 22:56:59.458555 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.28s
2025-06-01 22:56:59.458560 | orchestrator | 2025-06-01 22:56:59 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:56:59.458565 | orchestrator | 2025-06-01 22:56:59 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:57:02.486500 | orchestrator | 2025-06-01 22:57:02 | INFO  | Task ca8f8cb2-49b9-44d4-8865-5056c4f30643 is in state STARTED
2025-06-01 22:57:02.488941 | orchestrator | 2025-06-01 22:57:02 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:57:02.492068 | orchestrator | 2025-06-01 22:57:02 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:57:02.492298 | orchestrator | 2025-06-01 22:57:02 | INFO  | Wait 1 second(s) until the next check
[... identical status-check cycles from 22:57:05 to 22:58:09 omitted ...]
2025-06-01 22:58:12.714251 | orchestrator | 2025-06-01 22:58:12 | INFO  | Task ca8f8cb2-49b9-44d4-8865-5056c4f30643 is in state STARTED
2025-06-01 22:58:12.716002 | orchestrator | 2025-06-01 22:58:12 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state STARTED
2025-06-01 22:58:12.717481 | orchestrator | 2025-06-01 22:58:12 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED
2025-06-01 22:58:12.717506 | orchestrator | 2025-06-01 22:58:12 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:58:15.761233 | orchestrator | 2025-06-01 22:58:15 | INFO  | Task ca8f8cb2-49b9-44d4-8865-5056c4f30643 is in state STARTED
2025-06-01 22:58:15.768298 | orchestrator |
2025-06-01 22:58:15.768357 | orchestrator |
2025-06-01 22:58:15.768370 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 22:58:15.768383 | orchestrator |
2025-06-01 22:58:15.768394 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 22:58:15.768405 | orchestrator | Sunday 01 June 2025 22:55:16 +0000 (0:00:00.258) 0:00:00.258 ***********
2025-06-01 22:58:15.768416 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:58:15.768428 | orchestrator | ok: [testbed-node-1]
2025-06-01
22:58:15.768439 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:58:15.768449 | orchestrator |
2025-06-01 22:58:15.768479 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 22:58:15.768490 | orchestrator | Sunday 01 June 2025 22:55:17 +0000 (0:00:00.299) 0:00:00.557 ***********
2025-06-01 22:58:15.768502 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-06-01 22:58:15.768512 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-06-01 22:58:15.768543 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-06-01 22:58:15.768553 | orchestrator |
2025-06-01 22:58:15.768563 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-06-01 22:58:15.768573 | orchestrator |
2025-06-01 22:58:15.768583 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-06-01 22:58:15.768592 | orchestrator | Sunday 01 June 2025 22:55:17 +0000 (0:00:00.515) 0:00:01.073 ***********
2025-06-01 22:58:15.768602 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:58:15.768612 | orchestrator |
2025-06-01 22:58:15.768622 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-06-01 22:58:15.768631 | orchestrator | Sunday 01 June 2025 22:55:18 +0000 (0:00:00.716) 0:00:01.576 ***********
2025-06-01 22:58:15.768698 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-01 22:58:15.768710 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-01 22:58:15.768721 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-01 22:58:15.768730 | orchestrator |
2025-06-01 22:58:15.768740 | orchestrator | TASK
[opensearch : Ensuring config directories exist] ************************** 2025-06-01 22:58:15.768749 | orchestrator | Sunday 01 June 2025 22:55:18 +0000 (0:00:00.716) 0:00:02.292 *********** 2025-06-01 22:58:15.768763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 22:58:15.768779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 22:58:15.768805 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 22:58:15.768825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 22:58:15.768851 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 22:58:15.768863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 22:58:15.768874 | orchestrator | 2025-06-01 22:58:15.768884 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-01 22:58:15.768894 | orchestrator | Sunday 01 June 2025 22:55:20 +0000 (0:00:01.782) 0:00:04.075 *********** 2025-06-01 22:58:15.768903 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:58:15.768913 | orchestrator | 2025-06-01 22:58:15.768922 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-06-01 22:58:15.768932 | orchestrator | Sunday 01 June 2025 22:55:21 +0000 (0:00:00.580) 0:00:04.656 *********** 2025-06-01 22:58:15.768949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 22:58:15.768965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 22:58:15.768982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 22:58:15.768993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 22:58:15.769012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 22:58:15.769029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 22:58:15.769046 | orchestrator | 2025-06-01 22:58:15.769057 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-06-01 22:58:15.769067 | orchestrator | Sunday 01 June 2025 22:55:23 +0000 (0:00:02.746) 0:00:07.403 *********** 2025-06-01 22:58:15.769077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-01 22:58:15.769087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-01 22:58:15.769098 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:58:15.769116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-01 22:58:15.769131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-01 22:58:15.769148 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:58:15.769159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-01 22:58:15.769170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-01 22:58:15.769180 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:58:15.769190 | orchestrator | 2025-06-01 22:58:15.769200 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-06-01 22:58:15.769210 | orchestrator | Sunday 01 June 2025 22:55:25 +0000 (0:00:01.508) 0:00:08.911 *********** 2025-06-01 22:58:15.769226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-01 22:58:15.769248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-01 22:58:15.769259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-01 22:58:15.769270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-01 22:58:15.769281 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:58:15.769291 | orchestrator | skipping: [testbed-node-1] 2025-06-01 
22:58:15.769306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-01 22:58:15.769327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-01 
22:58:15.769338 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:58:15.769348 | orchestrator | 2025-06-01 22:58:15.769357 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-06-01 22:58:15.769368 | orchestrator | Sunday 01 June 2025 22:55:26 +0000 (0:00:01.511) 0:00:10.423 *********** 2025-06-01 22:58:15.769378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 22:58:15.769389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 22:58:15.769400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 22:58:15.769433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 22:58:15.769445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 22:58:15.769457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 22:58:15.769467 | orchestrator | 2025-06-01 22:58:15.769477 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-06-01 22:58:15.769487 | orchestrator | Sunday 01 June 2025 22:55:29 +0000 (0:00:02.360) 0:00:12.783 *********** 2025-06-01 22:58:15.769497 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:58:15.769507 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:58:15.769517 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:58:15.769527 | orchestrator | 2025-06-01 22:58:15.769537 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-06-01 22:58:15.769546 | orchestrator | Sunday 01 June 2025 22:55:32 +0000 (0:00:03.392) 0:00:16.176 *********** 2025-06-01 22:58:15.769569 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:58:15.769579 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:58:15.769588 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:58:15.769598 | orchestrator | 2025-06-01 22:58:15.769608 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-06-01 22:58:15.769618 | orchestrator | Sunday 01 June 2025 22:55:34 +0000 (0:00:01.736) 0:00:17.912 *********** 2025-06-01 22:58:15.769635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
2025-06-01 22:58:15 | INFO  | Task 783ae4a6-aae9-463b-8c98-c78efd85099d is in state SUCCESS 2025-06-01 22:58:15.769647 | orchestrator | ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 22:58:15.769684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 22:58:15.769695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 22:58:15.769706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 22:58:15.769732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 22:58:15.769749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 22:58:15.769760 | orchestrator | 2025-06-01 22:58:15.769769 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-01 22:58:15.769779 | orchestrator | Sunday 01 June 2025 
22:55:36 +0000 (0:00:01.940) 0:00:19.853 *********** 2025-06-01 22:58:15.769789 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:58:15.769799 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:58:15.769809 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:58:15.769819 | orchestrator | 2025-06-01 22:58:15.769829 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-01 22:58:15.769838 | orchestrator | Sunday 01 June 2025 22:55:36 +0000 (0:00:00.319) 0:00:20.172 *********** 2025-06-01 22:58:15.769848 | orchestrator | 2025-06-01 22:58:15.769858 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-01 22:58:15.769867 | orchestrator | Sunday 01 June 2025 22:55:36 +0000 (0:00:00.064) 0:00:20.236 *********** 2025-06-01 22:58:15.769877 | orchestrator | 2025-06-01 22:58:15.769886 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-01 22:58:15.769896 | orchestrator | Sunday 01 June 2025 22:55:36 +0000 (0:00:00.070) 0:00:20.306 *********** 2025-06-01 22:58:15.769906 | orchestrator | 2025-06-01 22:58:15.769915 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-06-01 22:58:15.769925 | orchestrator | Sunday 01 June 2025 22:55:37 +0000 (0:00:00.267) 0:00:20.574 *********** 2025-06-01 22:58:15.769934 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:58:15.769944 | orchestrator | 2025-06-01 22:58:15.769954 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-06-01 22:58:15.769974 | orchestrator | Sunday 01 June 2025 22:55:37 +0000 (0:00:00.227) 0:00:20.802 *********** 2025-06-01 22:58:15.769984 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:58:15.769994 | orchestrator | 2025-06-01 22:58:15.770003 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] 
******************** 2025-06-01 22:58:15.770061 | orchestrator | Sunday 01 June 2025 22:55:37 +0000 (0:00:00.239) 0:00:21.041 *********** 2025-06-01 22:58:15.770073 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:58:15.770083 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:58:15.770092 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:58:15.770102 | orchestrator | 2025-06-01 22:58:15.770112 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-06-01 22:58:15.770122 | orchestrator | Sunday 01 June 2025 22:56:46 +0000 (0:01:09.001) 0:01:30.043 *********** 2025-06-01 22:58:15.770131 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:58:15.770141 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:58:15.770151 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:58:15.770160 | orchestrator | 2025-06-01 22:58:15.770170 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-01 22:58:15.770179 | orchestrator | Sunday 01 June 2025 22:58:02 +0000 (0:01:15.500) 0:02:45.544 *********** 2025-06-01 22:58:15.770189 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:58:15.770199 | orchestrator | 2025-06-01 22:58:15.770208 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-06-01 22:58:15.770218 | orchestrator | Sunday 01 June 2025 22:58:02 +0000 (0:00:00.720) 0:02:46.264 *********** 2025-06-01 22:58:15.770228 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:58:15.770237 | orchestrator | 2025-06-01 22:58:15.770247 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-06-01 22:58:15.770256 | orchestrator | Sunday 01 June 2025 22:58:05 +0000 (0:00:02.294) 0:02:48.558 *********** 2025-06-01 22:58:15.770266 | orchestrator | ok: [testbed-node-0] 2025-06-01 
22:58:15.770276 | orchestrator | 2025-06-01 22:58:15.770286 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-06-01 22:58:15.770295 | orchestrator | Sunday 01 June 2025 22:58:07 +0000 (0:00:02.122) 0:02:50.681 *********** 2025-06-01 22:58:15.770305 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:58:15.770315 | orchestrator | 2025-06-01 22:58:15.770325 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-06-01 22:58:15.770341 | orchestrator | Sunday 01 June 2025 22:58:09 +0000 (0:00:02.790) 0:02:53.472 *********** 2025-06-01 22:58:15.770350 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:58:15.770360 | orchestrator | 2025-06-01 22:58:15.770370 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:58:15.770381 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 22:58:15.770394 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-01 22:58:15.770409 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-01 22:58:15.770419 | orchestrator | 2025-06-01 22:58:15.770429 | orchestrator | 2025-06-01 22:58:15.770439 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:58:15.770448 | orchestrator | Sunday 01 June 2025 22:58:12 +0000 (0:00:02.563) 0:02:56.035 *********** 2025-06-01 22:58:15.770458 | orchestrator | =============================================================================== 2025-06-01 22:58:15.770468 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 75.50s 2025-06-01 22:58:15.770477 | orchestrator | opensearch : Restart opensearch container ------------------------------ 69.00s 2025-06-01 22:58:15.770495 
| orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.39s 2025-06-01 22:58:15.770505 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.79s 2025-06-01 22:58:15.770515 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.75s 2025-06-01 22:58:15.770524 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.56s 2025-06-01 22:58:15.770534 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.36s 2025-06-01 22:58:15.770544 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.29s 2025-06-01 22:58:15.770553 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.12s 2025-06-01 22:58:15.770563 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.94s 2025-06-01 22:58:15.770572 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.78s 2025-06-01 22:58:15.770582 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.74s 2025-06-01 22:58:15.770592 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.51s 2025-06-01 22:58:15.770601 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.51s 2025-06-01 22:58:15.770611 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.72s 2025-06-01 22:58:15.770621 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.72s 2025-06-01 22:58:15.770630 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.58s 2025-06-01 22:58:15.770640 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s 2025-06-01 22:58:15.770649 | 
orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2025-06-01 22:58:15.770686 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.40s 2025-06-01 22:58:15.770696 | orchestrator | 2025-06-01 22:58:15 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED 2025-06-01 22:58:15.770706 | orchestrator | 2025-06-01 22:58:15 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:58:18.823939 | orchestrator | 2025-06-01 22:58:18 | INFO  | Task ca8f8cb2-49b9-44d4-8865-5056c4f30643 is in state STARTED 2025-06-01 22:58:18.825086 | orchestrator | 2025-06-01 22:58:18 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED 2025-06-01 22:58:18.825115 | orchestrator | 2025-06-01 22:58:18 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:58:21.877261 | orchestrator | 2025-06-01 22:58:21 | INFO  | Task ca8f8cb2-49b9-44d4-8865-5056c4f30643 is in state STARTED 2025-06-01 22:58:21.879391 | orchestrator | 2025-06-01 22:58:21 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED 2025-06-01 22:58:21.879441 | orchestrator | 2025-06-01 22:58:21 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:58:24.925527 | orchestrator | 2025-06-01 22:58:24 | INFO  | Task ca8f8cb2-49b9-44d4-8865-5056c4f30643 is in state STARTED 2025-06-01 22:58:24.927255 | orchestrator | 2025-06-01 22:58:24 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED 2025-06-01 22:58:24.927300 | orchestrator | 2025-06-01 22:58:24 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:58:27.974860 | orchestrator | 2025-06-01 22:58:27 | INFO  | Task ca8f8cb2-49b9-44d4-8865-5056c4f30643 is in state STARTED 2025-06-01 22:58:27.976222 | orchestrator | 2025-06-01 22:58:27 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state STARTED 2025-06-01 22:58:27.976260 | orchestrator | 2025-06-01 22:58:27 | INFO  | Wait 1 second(s) until the next 
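The post-config tasks above first check whether a log retention policy exists and create one only on a miss. A minimal sketch of that check-then-create flow against the OpenSearch ISM plugin (the policy body and the `flog-*` index pattern are assumptions for illustration; the actual payload the role posts is not visible in this log):

```python
import json

def build_retention_policy(retention_days: int) -> dict:
    """Build a minimal OpenSearch ISM policy body that transitions indices
    to a delete state after `retention_days` days. Layout follows the ISM
    plugin's policy schema; field values here are illustrative only."""
    return {
        "policy": {
            "description": f"Delete indices older than {retention_days} days",
            "default_state": "retention",
            "states": [
                {
                    "name": "retention",
                    "actions": [],
                    "transitions": [
                        {"state_name": "delete",
                         "conditions": {"min_index_age": f"{retention_days}d"}}
                    ],
                },
                {"name": "delete", "actions": [{"delete": {}}], "transitions": []},
            ],
            # Hypothetical index pattern -- the role's real pattern is not shown.
            "ism_template": [{"index_patterns": ["flog-*"], "priority": 1}],
        }
    }

def policy_action(http_status: int) -> str:
    """Mirror the check-then-create logic: a 404 from GET
    _plugins/_ism/policies/<name> means the policy must be created,
    any 2xx means it already exists and creation is skipped."""
    if http_status == 404:
        return "create"
    if 200 <= http_status < 300:
        return "exists"
    return "error"

body = json.dumps(build_retention_policy(31))
```

The two-step shape matters for idempotence: a plain PUT on every run would bump the policy sequence numbers and re-trigger managed-index updates.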
check 2025-06-01 22:58:31.028153 | orchestrator | 2025-06-01 22:58:31 | INFO  | Task ca8f8cb2-49b9-44d4-8865-5056c4f30643 is in state STARTED 2025-06-01 22:58:31.033525 | orchestrator | 2025-06-01 22:58:31.033572 | orchestrator | 2025-06-01 22:58:31.033587 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-06-01 22:58:31.033601 | orchestrator | 2025-06-01 22:58:31.033619 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-01 22:58:31.033638 | orchestrator | Sunday 01 June 2025 22:55:16 +0000 (0:00:00.099) 0:00:00.099 *********** 2025-06-01 22:58:31.033718 | orchestrator | ok: [localhost] => { 2025-06-01 22:58:31.033738 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-06-01 22:58:31.033755 | orchestrator | } 2025-06-01 22:58:31.033775 | orchestrator | 2025-06-01 22:58:31.033791 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-06-01 22:58:31.033819 | orchestrator | Sunday 01 June 2025 22:55:16 +0000 (0:00:00.058) 0:00:00.157 *********** 2025-06-01 22:58:31.033831 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-06-01 22:58:31.033844 | orchestrator | ...ignoring 2025-06-01 22:58:31.033856 | orchestrator | 2025-06-01 22:58:31.033867 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-06-01 22:58:31.033878 | orchestrator | Sunday 01 June 2025 22:55:19 +0000 (0:00:02.852) 0:00:03.009 *********** 2025-06-01 22:58:31.033889 | orchestrator | skipping: [localhost] 2025-06-01 22:58:31.033900 | orchestrator | 2025-06-01 22:58:31.034196 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-06-01 22:58:31.034210 | orchestrator | Sunday 01 June 2025 22:55:19 +0000 (0:00:00.075) 0:00:03.085 *********** 2025-06-01 22:58:31.034222 | orchestrator | ok: [localhost] 2025-06-01 22:58:31.034233 | orchestrator | 2025-06-01 22:58:31.034244 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 22:58:31.034255 | orchestrator | 2025-06-01 22:58:31.034266 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 22:58:31.034277 | orchestrator | Sunday 01 June 2025 22:55:19 +0000 (0:00:00.202) 0:00:03.287 *********** 2025-06-01 22:58:31.034288 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:58:31.034299 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:58:31.034309 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:58:31.034320 | orchestrator | 2025-06-01 22:58:31.034331 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 22:58:31.034342 | orchestrator | Sunday 01 June 2025 22:55:20 +0000 (0:00:00.317) 0:00:03.605 *********** 2025-06-01 22:58:31.034353 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-01 22:58:31.034364 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
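The "Check MariaDB service" task above is an Ansible `wait_for` that connects to 192.168.16.9:3306 and searches the reply for the string `MariaDB`; the timeout is expected on a fresh deploy, as the preceding message says. A sketch of the same probe in plain Python (a hypothetical helper, not the actual module -- a MariaDB server announces itself in the cleartext handshake packet it sends first):

```python
import socket

def banner_has(data: bytes, needle: str) -> bool:
    """The core test: does the server's greeting contain the marker
    string? Equivalent to wait_for's search_regex=MariaDB."""
    return needle.encode() in data

def probe_mariadb(host: str, port: int = 3306, timeout: float = 2.0) -> bool:
    """Connect and read the initial handshake; the server version string
    inside it contains 'MariaDB' for a Galera node. Returns False on
    refusal or timeout, which is the 'fine' failure seen above before
    the service has been deployed."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return banner_has(s.recv(128), "MariaDB")
    except OSError:
        return False

# Fragment shaped like a real MariaDB handshake (version string in cleartext):
greeting = b"\x5b\x00\x00\x00\x0a10.11.13-MariaDB-log\x00"
```

The result of this probe is what drives the following two tasks: `kolla_action_mariadb = upgrade` if the banner was found, otherwise fall through to the fresh-deploy action.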
2025-06-01 22:58:31.034375 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-01 22:58:31.034386 | orchestrator | 2025-06-01 22:58:31.034397 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-01 22:58:31.034408 | orchestrator | 2025-06-01 22:58:31.034419 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-01 22:58:31.034430 | orchestrator | Sunday 01 June 2025 22:55:20 +0000 (0:00:00.651) 0:00:04.257 *********** 2025-06-01 22:58:31.034441 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-01 22:58:31.034453 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-01 22:58:31.034463 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-01 22:58:31.034474 | orchestrator | 2025-06-01 22:58:31.034485 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-01 22:58:31.034496 | orchestrator | Sunday 01 June 2025 22:55:21 +0000 (0:00:00.406) 0:00:04.664 *********** 2025-06-01 22:58:31.034507 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:58:31.034518 | orchestrator | 2025-06-01 22:58:31.034529 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-06-01 22:58:31.034540 | orchestrator | Sunday 01 June 2025 22:55:21 +0000 (0:00:00.582) 0:00:05.246 *********** 2025-06-01 22:58:31.034586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 22:58:31.034612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 22:58:31.034626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 22:58:31.034646 | orchestrator | 2025-06-01 22:58:31.034689 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-06-01 22:58:31.034701 | orchestrator | Sunday 01 June 2025 22:55:25 +0000 (0:00:03.109) 0:00:08.356 *********** 2025-06-01 22:58:31.034713 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:58:31.034724 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:58:31.034735 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:58:31.034745 | orchestrator | 2025-06-01 22:58:31.034756 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-06-01 22:58:31.034767 | orchestrator | Sunday 01 June 2025 22:55:26 +0000 (0:00:01.432) 0:00:09.788 *********** 2025-06-01 22:58:31.034777 | orchestrator | 
skipping: [testbed-node-1] 2025-06-01 22:58:31.034788 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:58:31.034799 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:58:31.034810 | orchestrator | 2025-06-01 22:58:31.034826 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-06-01 22:58:31.034837 | orchestrator | Sunday 01 June 2025 22:55:28 +0000 (0:00:01.716) 0:00:11.505 *********** 2025-06-01 22:58:31.034849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 22:58:31.034878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 22:58:31.034896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 22:58:31.034916 | orchestrator | 2025-06-01 22:58:31.034927 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-06-01 22:58:31.034938 | orchestrator | Sunday 01 June 2025 22:55:32 +0000 (0:00:03.969) 0:00:15.474 *********** 2025-06-01 22:58:31.034949 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:58:31.034960 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:58:31.034971 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:58:31.034982 | orchestrator | 2025-06-01 22:58:31.034993 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-06-01 22:58:31.035004 | orchestrator | Sunday 01 June 2025 22:55:33 +0000 (0:00:01.158) 0:00:16.633 *********** 2025-06-01 22:58:31.035014 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:58:31.035025 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:58:31.035036 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:58:31.035047 | orchestrator | 2025-06-01 22:58:31.035058 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-01 22:58:31.035069 | orchestrator | Sunday 01 June 2025 22:55:37 +0000 (0:00:04.300) 0:00:20.933 *********** 2025-06-01 22:58:31.035080 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:58:31.035091 | orchestrator | 2025-06-01 22:58:31.035102 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-01 22:58:31.035113 | orchestrator | Sunday 01 June 2025 22:55:38 +0000 (0:00:00.532) 0:00:21.466 *********** 2025-06-01 22:58:31.035138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 22:58:31.035151 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:58:31.035163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 22:58:31.035182 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:58:31.035201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 22:58:31.035213 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:58:31.035225 | orchestrator | 2025-06-01 22:58:31.035241 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-01 22:58:31.035251 | orchestrator | Sunday 01 June 2025 22:55:41 
+0000 (0:00:03.408) 0:00:24.875 *********** 2025-06-01 22:58:31.035263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 22:58:31.035282 | orchestrator | skipping: [testbed-node-2] 2025-06-01 
22:58:31.035300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 22:58:31.035313 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:58:31.035330 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 22:58:31.035354 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:58:31.035366 | orchestrator | 2025-06-01 22:58:31.035376 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS key] ***** 2025-06-01 22:58:31.035387 | orchestrator | Sunday 01 June 2025 22:55:45 +0000 (0:00:03.561) 0:00:28.436 *********** 2025-06-01 22:58:31.035405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2025-06-01 22:58:31.035417 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:58:31.035434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 22:58:31.035453 
| orchestrator | skipping: [testbed-node-1] 2025-06-01 22:58:31.035465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 22:58:31.035477 | orchestrator | skipping: [testbed-node-0] 2025-06-01 
22:58:31.035488 | orchestrator | 2025-06-01 22:58:31.035498 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-06-01 22:58:31.035509 | orchestrator | Sunday 01 June 2025 22:55:48 +0000 (0:00:03.411) 0:00:31.847 *********** 2025-06-01 22:58:31.035526 | orchestrator | 2025-06-01 22:58:31 | INFO  | Task 1f579d52-7bbd-4023-a596-26a7b10604b8 is in state SUCCESS 2025-06-01 22:58:31.035538 | orchestrator | 2025-06-01 22:58:31 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:58:31.035554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka',
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 22:58:31.035573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 22:58:31.035601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 22:58:31.035620 | orchestrator | 2025-06-01 22:58:31.035631 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-06-01 22:58:31.035642 | orchestrator | Sunday 01 June 2025 22:55:52 +0000 (0:00:03.922) 0:00:35.770 *********** 2025-06-01 22:58:31.035669 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:58:31.035680 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:58:31.035691 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:58:31.035702 | orchestrator | 2025-06-01 22:58:31.035713 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-06-01 22:58:31.035724 | orchestrator | Sunday 01 June 2025 22:55:53 +0000 (0:00:01.048) 0:00:36.819 *********** 2025-06-01 22:58:31.035735 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:58:31.035745 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:58:31.035756 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:58:31.035767 | orchestrator | 2025-06-01 22:58:31.035778 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-06-01 22:58:31.035788 | orchestrator | Sunday 01 June 2025 22:55:53 +0000 (0:00:00.327) 0:00:37.146 *********** 2025-06-01 22:58:31.035799 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:58:31.035810 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:58:31.035820 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:58:31.035831 | orchestrator | 2025-06-01 22:58:31.035842 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-06-01 22:58:31.035853 | orchestrator | Sunday 01 June 2025 22:55:54 +0000 (0:00:00.341) 0:00:37.488 *********** 2025-06-01 22:58:31.035865 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-06-01 22:58:31.035876 | orchestrator | ...ignoring 2025-06-01 22:58:31.035887 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-06-01 22:58:31.035898 | orchestrator | ...ignoring 2025-06-01 22:58:31.035909 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-06-01 22:58:31.035920 | orchestrator | ...ignoring 2025-06-01 22:58:31.035931 | orchestrator | 2025-06-01 22:58:31.035942 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-06-01 22:58:31.035952 | orchestrator | Sunday 01 June 2025 22:56:05 +0000 (0:00:11.021) 0:00:48.510 *********** 2025-06-01 22:58:31.035963 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:58:31.035974 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:58:31.035985 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:58:31.035995 | orchestrator | 2025-06-01 22:58:31.036006 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-06-01 22:58:31.036017 | orchestrator | Sunday 01 June 2025 22:56:05 +0000 (0:00:00.676) 0:00:49.187 *********** 2025-06-01 22:58:31.036028 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:58:31.036045 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:58:31.036056 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:58:31.036067 | orchestrator | 2025-06-01 22:58:31.036078 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-06-01 22:58:31.036089 | orchestrator | Sunday 01 June 2025 22:56:06 +0000 (0:00:00.435) 0:00:49.622 *********** 2025-06-01 22:58:31.036100 | orchestrator | skipping: 
[testbed-node-0] 2025-06-01 22:58:31.036110 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:58:31.036121 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:58:31.036132 | orchestrator | 2025-06-01 22:58:31.036143 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-06-01 22:58:31.036153 | orchestrator | Sunday 01 June 2025 22:56:06 +0000 (0:00:00.410) 0:00:50.033 *********** 2025-06-01 22:58:31.036170 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:58:31.036181 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:58:31.036191 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:58:31.036202 | orchestrator | 2025-06-01 22:58:31.036213 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-06-01 22:58:31.036224 | orchestrator | Sunday 01 June 2025 22:56:07 +0000 (0:00:00.480) 0:00:50.513 *********** 2025-06-01 22:58:31.036235 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:58:31.036246 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:58:31.036257 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:58:31.036267 | orchestrator | 2025-06-01 22:58:31.036278 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-06-01 22:58:31.036289 | orchestrator | Sunday 01 June 2025 22:56:07 +0000 (0:00:00.635) 0:00:51.149 *********** 2025-06-01 22:58:31.036305 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:58:31.036317 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:58:31.036327 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:58:31.036338 | orchestrator | 2025-06-01 22:58:31.036349 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-01 22:58:31.036360 | orchestrator | Sunday 01 June 2025 22:56:08 +0000 (0:00:00.428) 0:00:51.578 *********** 2025-06-01 22:58:31.036370 | orchestrator | skipping: 
[testbed-node-1] 2025-06-01 22:58:31.036381 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:58:31.036392 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-06-01 22:58:31.036403 | orchestrator | 2025-06-01 22:58:31.036414 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-06-01 22:58:31.036425 | orchestrator | Sunday 01 June 2025 22:56:08 +0000 (0:00:00.376) 0:00:51.954 *********** 2025-06-01 22:58:31.036435 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:58:31.036446 | orchestrator | 2025-06-01 22:58:31.036457 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-06-01 22:58:31.036468 | orchestrator | Sunday 01 June 2025 22:56:18 +0000 (0:00:10.008) 0:01:01.963 *********** 2025-06-01 22:58:31.036478 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:58:31.036489 | orchestrator | 2025-06-01 22:58:31.036500 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-01 22:58:31.036511 | orchestrator | Sunday 01 June 2025 22:56:18 +0000 (0:00:00.140) 0:01:02.103 *********** 2025-06-01 22:58:31.036522 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:58:31.036532 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:58:31.036543 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:58:31.036554 | orchestrator | 2025-06-01 22:58:31.036564 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-06-01 22:58:31.036575 | orchestrator | Sunday 01 June 2025 22:56:19 +0000 (0:00:00.997) 0:01:03.100 *********** 2025-06-01 22:58:31.036586 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:58:31.036597 | orchestrator | 2025-06-01 22:58:31.036608 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-06-01 22:58:31.036618 | orchestrator | Sunday 01 
June 2025 22:56:27 +0000 (0:00:07.742) 0:01:10.843 *********** 2025-06-01 22:58:31.036635 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:58:31.036646 | orchestrator | 2025-06-01 22:58:31.036690 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-06-01 22:58:31.036702 | orchestrator | Sunday 01 June 2025 22:56:29 +0000 (0:00:01.599) 0:01:12.443 *********** 2025-06-01 22:58:31.036712 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:58:31.036723 | orchestrator | 2025-06-01 22:58:31.036734 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-06-01 22:58:31.036745 | orchestrator | Sunday 01 June 2025 22:56:31 +0000 (0:00:02.537) 0:01:14.980 *********** 2025-06-01 22:58:31.036755 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:58:31.036766 | orchestrator | 2025-06-01 22:58:31.036777 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-06-01 22:58:31.036787 | orchestrator | Sunday 01 June 2025 22:56:31 +0000 (0:00:00.123) 0:01:15.104 *********** 2025-06-01 22:58:31.036798 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:58:31.036808 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:58:31.036819 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:58:31.036830 | orchestrator | 2025-06-01 22:58:31.036840 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-06-01 22:58:31.036851 | orchestrator | Sunday 01 June 2025 22:56:32 +0000 (0:00:00.514) 0:01:15.618 *********** 2025-06-01 22:58:31.036861 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:58:31.036872 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-01 22:58:31.036883 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:58:31.036893 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:58:31.036904 | orchestrator | 
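Editor's note: the "Wait for first MariaDB service to sync WSREP" handler above polls the bootstrap node until Galera reports it as synced. A minimal sketch of that decision in Python; the status variable `wsrep_local_state_comment` and the value `Synced` are real Galera conventions, while the helper name and input format (tab-separated `SHOW STATUS` output) are illustrative assumptions:

```python
def is_wsrep_synced(show_status_output: str) -> bool:
    """Return True when Galera reports this node as Synced.

    Expects tab-separated output such as that produced by
    `mysql -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"`.
    (Helper name and input format are hypothetical.)
    """
    for line in show_status_output.splitlines():
        parts = line.split("\t")
        if len(parts) == 2 and parts[0] == "wsrep_local_state_comment":
            # Other values (Donor/Desynced, Joining, ...) mean not ready.
            return parts[1] == "Synced"
    return False
```

A node still catching up reports for example `Donor/Desynced`, which is why the play retries until `Synced` appears.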
2025-06-01 22:58:31.036915 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-01 22:58:31.036925 | orchestrator | skipping: no hosts matched 2025-06-01 22:58:31.036936 | orchestrator | 2025-06-01 22:58:31.036946 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-01 22:58:31.036957 | orchestrator | 2025-06-01 22:58:31.036968 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-01 22:58:31.036978 | orchestrator | Sunday 01 June 2025 22:56:32 +0000 (0:00:00.347) 0:01:15.965 *********** 2025-06-01 22:58:31.036989 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:58:31.037000 | orchestrator | 2025-06-01 22:58:31.037011 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-01 22:58:31.037021 | orchestrator | Sunday 01 June 2025 22:56:51 +0000 (0:00:19.035) 0:01:35.001 *********** 2025-06-01 22:58:31.037032 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:58:31.037042 | orchestrator | 2025-06-01 22:58:31.037053 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-01 22:58:31.037064 | orchestrator | Sunday 01 June 2025 22:57:12 +0000 (0:00:20.663) 0:01:55.664 *********** 2025-06-01 22:58:31.037074 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:58:31.037085 | orchestrator | 2025-06-01 22:58:31.037095 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-01 22:58:31.037106 | orchestrator | 2025-06-01 22:58:31.037117 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-01 22:58:31.037134 | orchestrator | Sunday 01 June 2025 22:57:14 +0000 (0:00:02.519) 0:01:58.183 *********** 2025-06-01 22:58:31.037145 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:58:31.037156 | orchestrator | 
2025-06-01 22:58:31.037166 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-01 22:58:31.037177 | orchestrator | Sunday 01 June 2025 22:57:35 +0000 (0:00:20.218) 0:02:18.401 *********** 2025-06-01 22:58:31.037188 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:58:31.037199 | orchestrator | 2025-06-01 22:58:31.037209 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-01 22:58:31.037220 | orchestrator | Sunday 01 June 2025 22:57:55 +0000 (0:00:20.577) 0:02:38.979 *********** 2025-06-01 22:58:31.037231 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:58:31.037242 | orchestrator | 2025-06-01 22:58:31.037259 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-01 22:58:31.037270 | orchestrator | 2025-06-01 22:58:31.037286 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-01 22:58:31.037297 | orchestrator | Sunday 01 June 2025 22:57:58 +0000 (0:00:02.864) 0:02:41.844 *********** 2025-06-01 22:58:31.037308 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:58:31.037318 | orchestrator | 2025-06-01 22:58:31.037329 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-01 22:58:31.037340 | orchestrator | Sunday 01 June 2025 22:58:10 +0000 (0:00:11.699) 0:02:53.543 *********** 2025-06-01 22:58:31.037350 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:58:31.037361 | orchestrator | 2025-06-01 22:58:31.037371 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-01 22:58:31.037382 | orchestrator | Sunday 01 June 2025 22:58:14 +0000 (0:00:04.646) 0:02:58.190 *********** 2025-06-01 22:58:31.037393 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:58:31.037403 | orchestrator | 2025-06-01 22:58:31.037414 | orchestrator | PLAY [Apply mariadb 
post-configuration] **************************************** 2025-06-01 22:58:31.037425 | orchestrator | 2025-06-01 22:58:31.037436 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-01 22:58:31.037446 | orchestrator | Sunday 01 June 2025 22:58:17 +0000 (0:00:02.511) 0:03:00.701 *********** 2025-06-01 22:58:31.037457 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:58:31.037468 | orchestrator | 2025-06-01 22:58:31.037479 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-06-01 22:58:31.037489 | orchestrator | Sunday 01 June 2025 22:58:17 +0000 (0:00:00.520) 0:03:01.221 *********** 2025-06-01 22:58:31.037500 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:58:31.037511 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:58:31.037521 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:58:31.037532 | orchestrator | 2025-06-01 22:58:31.037543 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-06-01 22:58:31.037554 | orchestrator | Sunday 01 June 2025 22:58:20 +0000 (0:00:02.361) 0:03:03.582 *********** 2025-06-01 22:58:31.037564 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:58:31.037575 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:58:31.037586 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:58:31.037597 | orchestrator | 2025-06-01 22:58:31.037607 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-06-01 22:58:31.037618 | orchestrator | Sunday 01 June 2025 22:58:22 +0000 (0:00:02.044) 0:03:05.626 *********** 2025-06-01 22:58:31.037628 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:58:31.037639 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:58:31.037650 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:58:31.037681 | orchestrator | 
2025-06-01 22:58:31.037692 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-06-01 22:58:31.037703 | orchestrator | Sunday 01 June 2025 22:58:24 +0000 (0:00:02.121) 0:03:07.748 *********** 2025-06-01 22:58:31.037714 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:58:31.037725 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:58:31.037736 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:58:31.037746 | orchestrator | 2025-06-01 22:58:31.037757 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-06-01 22:58:31.037768 | orchestrator | Sunday 01 June 2025 22:58:26 +0000 (0:00:02.022) 0:03:09.770 *********** 2025-06-01 22:58:31.037778 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:58:31.037789 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:58:31.037800 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:58:31.037811 | orchestrator | 2025-06-01 22:58:31.037822 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-01 22:58:31.037832 | orchestrator | Sunday 01 June 2025 22:58:29 +0000 (0:00:03.048) 0:03:12.818 *********** 2025-06-01 22:58:31.037860 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:58:31.037878 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:58:31.037889 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:58:31.037900 | orchestrator | 2025-06-01 22:58:31.037910 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:58:31.037921 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-01 22:58:31.037933 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-06-01 22:58:31.037946 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1 
 2025-06-01 22:58:31.037957 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-01 22:58:31.037967 | orchestrator | 2025-06-01 22:58:31.037979 | orchestrator | 2025-06-01 22:58:31.037989 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:58:31.038000 | orchestrator | Sunday 01 June 2025 22:58:29 +0000 (0:00:00.219) 0:03:13.038 *********** 2025-06-01 22:58:31.038057 | orchestrator | =============================================================================== 2025-06-01 22:58:31.038072 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.24s 2025-06-01 22:58:31.038083 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 39.25s 2025-06-01 22:58:31.038094 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.70s 2025-06-01 22:58:31.038105 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.02s 2025-06-01 22:58:31.038116 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.01s 2025-06-01 22:58:31.038127 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.74s 2025-06-01 22:58:31.038144 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.38s 2025-06-01 22:58:31.038155 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.65s 2025-06-01 22:58:31.038166 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.30s 2025-06-01 22:58:31.038177 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.97s 2025-06-01 22:58:31.038188 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.92s 2025-06-01 22:58:31.038198 | orchestrator | 
service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.56s 2025-06-01 22:58:31.038209 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.41s 2025-06-01 22:58:31.038220 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.41s 2025-06-01 22:58:31.038231 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.11s 2025-06-01 22:58:31.038242 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.05s 2025-06-01 22:58:31.038253 | orchestrator | Check MariaDB service --------------------------------------------------- 2.85s 2025-06-01 22:58:31.038264 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.54s 2025-06-01 22:58:31.038275 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.51s 2025-06-01 22:58:31.038286 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.36s 
2025-06-01 22:58:34.078861 | orchestrator | 2025-06-01 22:58:34 | INFO  | Task ca8f8cb2-49b9-44d4-8865-5056c4f30643 is in state STARTED 2025-06-01 22:58:34.080098 | orchestrator | 2025-06-01 22:58:34 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED 2025-06-01 22:58:34.082277 | orchestrator | 2025-06-01 22:58:34 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED 2025-06-01 22:58:34.082410 | orchestrator | 2025-06-01 22:58:34 | INFO  | Wait 1 second(s) until the next check 
2025-06-01 22:59:10.747341 | orchestrator | 2025-06-01 22:59:10 | INFO  | Task ca8f8cb2-49b9-44d4-8865-5056c4f30643 is in state SUCCESS 2025-06-01 22:59:10.748550 | orchestrator | 2025-06-01 22:59:10.748872 | orchestrator | 2025-06-01 22:59:10.748894 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-06-01 22:59:10.748906 | orchestrator | 2025-06-01 22:59:10.748917 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-01 22:59:10.748929 | orchestrator | Sunday 01 June 2025 22:57:02 +0000 (0:00:00.588) 0:00:00.588 *********** 2025-06-01
22:59:10.748940 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:59:10.748953 | orchestrator | 2025-06-01 22:59:10.748964 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-01 22:59:10.748975 | orchestrator | Sunday 01 June 2025 22:57:03 +0000 (0:00:00.599) 0:00:01.187 *********** 2025-06-01 22:59:10.748986 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:59:10.748998 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:59:10.749009 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:59:10.749020 | orchestrator | 2025-06-01 22:59:10.749031 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-01 22:59:10.749042 | orchestrator | Sunday 01 June 2025 22:57:03 +0000 (0:00:00.661) 0:00:01.849 *********** 2025-06-01 22:59:10.749052 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:59:10.749063 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:59:10.749074 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:59:10.749084 | orchestrator | 2025-06-01 22:59:10.749095 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-01 22:59:10.749106 | orchestrator | Sunday 01 June 2025 22:57:04 +0000 (0:00:00.284) 0:00:02.134 *********** 2025-06-01 22:59:10.749144 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:59:10.749156 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:59:10.749166 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:59:10.749177 | orchestrator | 2025-06-01 22:59:10.749188 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-06-01 22:59:10.749198 | orchestrator | Sunday 01 June 2025 22:57:04 +0000 (0:00:00.834) 0:00:02.969 *********** 2025-06-01 22:59:10.749209 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:59:10.749220 | orchestrator | ok: [testbed-node-4] 
2025-06-01 22:59:10.749230 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:59:10.749241 | orchestrator | 2025-06-01 22:59:10.749251 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-06-01 22:59:10.749278 | orchestrator | Sunday 01 June 2025 22:57:05 +0000 (0:00:00.305) 0:00:03.275 *********** 2025-06-01 22:59:10.749289 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:59:10.749300 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:59:10.749311 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:59:10.749321 | orchestrator | 2025-06-01 22:59:10.749332 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-06-01 22:59:10.749343 | orchestrator | Sunday 01 June 2025 22:57:05 +0000 (0:00:00.316) 0:00:03.591 *********** 2025-06-01 22:59:10.749354 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:59:10.749365 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:59:10.749375 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:59:10.749386 | orchestrator | 2025-06-01 22:59:10.749397 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-06-01 22:59:10.749408 | orchestrator | Sunday 01 June 2025 22:57:05 +0000 (0:00:00.299) 0:00:03.890 *********** 2025-06-01 22:59:10.749419 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:59:10.749431 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:59:10.749441 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:59:10.749452 | orchestrator | 2025-06-01 22:59:10.749463 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-06-01 22:59:10.749476 | orchestrator | Sunday 01 June 2025 22:57:06 +0000 (0:00:00.476) 0:00:04.367 *********** 2025-06-01 22:59:10.749488 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:59:10.749500 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:59:10.749512 | orchestrator | ok: 
[testbed-node-5] 2025-06-01 22:59:10.749524 | orchestrator | 2025-06-01 22:59:10.749536 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-01 22:59:10.749548 | orchestrator | Sunday 01 June 2025 22:57:06 +0000 (0:00:00.290) 0:00:04.657 *********** 2025-06-01 22:59:10.749562 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-01 22:59:10.749574 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-01 22:59:10.749587 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-01 22:59:10.749599 | orchestrator | 2025-06-01 22:59:10.749612 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-01 22:59:10.749624 | orchestrator | Sunday 01 June 2025 22:57:07 +0000 (0:00:00.623) 0:00:05.281 *********** 2025-06-01 22:59:10.749637 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:59:10.749676 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:59:10.749689 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:59:10.749701 | orchestrator | 2025-06-01 22:59:10.749714 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-01 22:59:10.749726 | orchestrator | Sunday 01 June 2025 22:57:07 +0000 (0:00:00.451) 0:00:05.733 *********** 2025-06-01 22:59:10.749738 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-01 22:59:10.749751 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-01 22:59:10.749763 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-01 22:59:10.749776 | orchestrator | 2025-06-01 22:59:10.749797 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-01 
22:59:10.749808 | orchestrator | Sunday 01 June 2025 22:57:09 +0000 (0:00:02.098) 0:00:07.831 *********** 2025-06-01 22:59:10.749819 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-01 22:59:10.749830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-01 22:59:10.749841 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-01 22:59:10.749851 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:59:10.749862 | orchestrator | 2025-06-01 22:59:10.749873 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-01 22:59:10.749896 | orchestrator | Sunday 01 June 2025 22:57:10 +0000 (0:00:00.414) 0:00:08.246 *********** 2025-06-01 22:59:10.749910 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-01 22:59:10.749925 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-01 22:59:10.749936 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-01 22:59:10.749947 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:59:10.749958 | orchestrator | 2025-06-01 22:59:10.749969 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-01 22:59:10.749980 | orchestrator | Sunday 01 June 2025 22:57:10 +0000 (0:00:00.771) 0:00:09.017 *********** 2025-06-01 22:59:10.749993 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-01 22:59:10.750013 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-01 22:59:10.750078 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-01 22:59:10.750090 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:59:10.750101 | orchestrator | 2025-06-01 22:59:10.750112 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-01 22:59:10.750156 | orchestrator | Sunday 01 June 2025 22:57:11 +0000 (0:00:00.145) 0:00:09.162 *********** 2025-06-01 22:59:10.750170 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '93666e8ca21a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-01 22:57:08.341548', 'end': '2025-06-01 22:57:08.387481', 'delta': '0:00:00.045933', 'msg': '', 
'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['93666e8ca21a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-06-01 22:59:10.750194 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '0b90b514015e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-01 22:57:09.087204', 'end': '2025-06-01 22:57:09.140545', 'delta': '0:00:00.053341', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['0b90b514015e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-06-01 22:59:10.750216 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cb31e643391f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-01 22:57:09.634515', 'end': '2025-06-01 22:57:09.675016', 'delta': '0:00:00.040501', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cb31e643391f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-06-01 22:59:10.750228 | orchestrator | 2025-06-01 22:59:10.750239 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-01 22:59:10.750250 | orchestrator | Sunday 01 June 2025 22:57:11 +0000 (0:00:00.405) 0:00:09.568 *********** 2025-06-01 22:59:10.750261 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:59:10.750272 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:59:10.750283 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:59:10.750293 | orchestrator | 2025-06-01 22:59:10.750304 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-01 22:59:10.750315 | orchestrator | Sunday 01 June 2025 22:57:11 +0000 (0:00:00.450) 0:00:10.019 *********** 2025-06-01 22:59:10.750326 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-06-01 22:59:10.750337 | orchestrator | 2025-06-01 22:59:10.750348 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-01 22:59:10.750359 | orchestrator | Sunday 01 June 2025 22:57:14 +0000 (0:00:02.643) 0:00:12.662 *********** 2025-06-01 22:59:10.750369 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:59:10.750380 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:59:10.750391 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:59:10.750402 | orchestrator | 2025-06-01 22:59:10.750412 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-01 22:59:10.750429 | orchestrator | Sunday 01 June 2025 22:57:14 +0000 (0:00:00.356) 0:00:13.019 *********** 2025-06-01 22:59:10.750440 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:59:10.750451 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:59:10.750462 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:59:10.750473 | orchestrator | 2025-06-01 22:59:10.750484 | orchestrator | 
TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-01 22:59:10.750494 | orchestrator | Sunday 01 June 2025 22:57:15 +0000 (0:00:00.467) 0:00:13.486 *********** 2025-06-01 22:59:10.750505 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:59:10.750516 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:59:10.750527 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:59:10.750537 | orchestrator | 2025-06-01 22:59:10.750548 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-01 22:59:10.750566 | orchestrator | Sunday 01 June 2025 22:57:15 +0000 (0:00:00.494) 0:00:13.981 *********** 2025-06-01 22:59:10.750577 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:59:10.750588 | orchestrator | 2025-06-01 22:59:10.750599 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-01 22:59:10.750610 | orchestrator | Sunday 01 June 2025 22:57:16 +0000 (0:00:00.155) 0:00:14.136 *********** 2025-06-01 22:59:10.750620 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:59:10.750631 | orchestrator | 2025-06-01 22:59:10.750642 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-01 22:59:10.750688 | orchestrator | Sunday 01 June 2025 22:57:16 +0000 (0:00:00.234) 0:00:14.371 *********** 2025-06-01 22:59:10.750700 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:59:10.750710 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:59:10.750721 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:59:10.750732 | orchestrator | 2025-06-01 22:59:10.750743 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-01 22:59:10.750754 | orchestrator | Sunday 01 June 2025 22:57:16 +0000 (0:00:00.307) 0:00:14.679 *********** 2025-06-01 22:59:10.750764 | orchestrator | skipping: [testbed-node-3] 2025-06-01 
22:59:10.750775 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:59:10.750786 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:59:10.750797 | orchestrator | 2025-06-01 22:59:10.750807 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-01 22:59:10.750818 | orchestrator | Sunday 01 June 2025 22:57:16 +0000 (0:00:00.332) 0:00:15.011 *********** 2025-06-01 22:59:10.750829 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:59:10.750840 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:59:10.750850 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:59:10.750861 | orchestrator | 2025-06-01 22:59:10.750872 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-06-01 22:59:10.750882 | orchestrator | Sunday 01 June 2025 22:57:17 +0000 (0:00:00.526) 0:00:15.538 *********** 2025-06-01 22:59:10.750893 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:59:10.750903 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:59:10.750914 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:59:10.750925 | orchestrator | 2025-06-01 22:59:10.750936 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-01 22:59:10.750946 | orchestrator | Sunday 01 June 2025 22:57:17 +0000 (0:00:00.306) 0:00:15.844 *********** 2025-06-01 22:59:10.750957 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:59:10.751078 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:59:10.751096 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:59:10.751107 | orchestrator | 2025-06-01 22:59:10.751118 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-01 22:59:10.751129 | orchestrator | Sunday 01 June 2025 22:57:18 +0000 (0:00:00.330) 0:00:16.175 *********** 2025-06-01 22:59:10.751139 | orchestrator | skipping: [testbed-node-3] 2025-06-01 
22:59:10.751380 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:59:10.751397 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:59:10.751408 | orchestrator | 2025-06-01 22:59:10.751419 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-01 22:59:10.751458 | orchestrator | Sunday 01 June 2025 22:57:18 +0000 (0:00:00.335) 0:00:16.510 *********** 2025-06-01 22:59:10.751471 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:59:10.751482 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:59:10.751493 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:59:10.751504 | orchestrator | 2025-06-01 22:59:10.751514 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-01 22:59:10.751525 | orchestrator | Sunday 01 June 2025 22:57:19 +0000 (0:00:00.531) 0:00:17.042 *********** 2025-06-01 22:59:10.751537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--836f126b--3930--552c--8c28--37312a7074e3-osd--block--836f126b--3930--552c--8c28--37312a7074e3', 'dm-uuid-LVM-029Jp1Ec1ULGPT7VpQK8wuergGsAbmtCVfdLVCxb40tL0wN6DtrXRi9tfiPA9NoF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.751567 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--04cd8323--667e--5571--83c4--b35d38a67016-osd--block--04cd8323--667e--5571--83c4--b35d38a67016', 'dm-uuid-LVM-XlZok0vJhac7G4DhhcTcFFzSL9VflUk62og1cc2KuwGLzOFTDHfpzhcEqMoT7nvt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.751580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.751592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.751603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.751615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.751627 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': 
[], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65', 'scsi-SQEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part1', 'scsi-SQEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part14', 'scsi-SQEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part15', 'scsi-SQEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part16', 'scsi-SQEMU_QEMU_HARDDISK_658bfcf8-ebe2-4dc5-9176-cd4fbed88c65-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 
'virtual': 1}})  2025-06-01 22:59:10.752296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--836f126b--3930--552c--8c28--37312a7074e3-osd--block--836f126b--3930--552c--8c28--37312a7074e3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Y2DTDu-OqzU-iwrS-q9VQ-sl0t-PCaj-8TQ9zT', 'scsi-0QEMU_QEMU_HARDDISK_e3d9d8cc-8358-4e9f-a548-9ae6b89fa066', 'scsi-SQEMU_QEMU_HARDDISK_e3d9d8cc-8358-4e9f-a548-9ae6b89fa066'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:59:10.752364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--656e26cc--5762--5518--9587--501a37b6e3ae-osd--block--656e26cc--5762--5518--9587--501a37b6e3ae', 'dm-uuid-LVM-OsQWKWmb2Eb93srMle6JZEP4p1SzdO066wdVT1A9olADd4xdWe6zSXcfyUaFrVfp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--04cd8323--667e--5571--83c4--b35d38a67016-osd--block--04cd8323--667e--5571--83c4--b35d38a67016'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KL2Xbh-0IGE-VrUs-08nz-BHDo-s58k-swmDrm', 'scsi-0QEMU_QEMU_HARDDISK_bed9961c-b7ee-4957-bf35-2fee53571a5a', 'scsi-SQEMU_QEMU_HARDDISK_bed9961c-b7ee-4957-bf35-2fee53571a5a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:59:10.752406 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--154be1eb--c9a2--50db--b9e4--8c9f064a0b1c-osd--block--154be1eb--c9a2--50db--b9e4--8c9f064a0b1c', 'dm-uuid-LVM-WSo20NaMXnuIYmccxZZwFk2dZtNVqfMTwniI3oyA6ruR6ir5smlv2OXr4mCF7x5o'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d04dff8-74fe-4097-ace0-4c437e5e0f9f', 'scsi-SQEMU_QEMU_HARDDISK_6d04dff8-74fe-4097-ace0-4c437e5e0f9f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:59:10.752431 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:59:10.752456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-06-01 22:59:10.752469 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:59:10.752513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752551 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752620 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66', 'scsi-SQEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part1', 'scsi-SQEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part14', 'scsi-SQEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part15', 'scsi-SQEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part16', 'scsi-SQEMU_QEMU_HARDDISK_3516484c-810d-4999-9d3f-5a7b207baf66-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:59:10.752641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--656e26cc--5762--5518--9587--501a37b6e3ae-osd--block--656e26cc--5762--5518--9587--501a37b6e3ae'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xfTFaW-LCOM-DyHP-0Emp-Ele7-RXEX-RflTVe', 'scsi-0QEMU_QEMU_HARDDISK_540779ba-6163-469a-a896-cda4c9a0c816', 'scsi-SQEMU_QEMU_HARDDISK_540779ba-6163-469a-a896-cda4c9a0c816'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:59:10.752698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--83360607--213f--5c54--ae9b--aa580894d048-osd--block--83360607--213f--5c54--ae9b--aa580894d048', 'dm-uuid-LVM-WVOBDT6woFNABVp5fCTIXaegcR0xFT0LuT0F0TMrOmiMe1YaCQo6tWAWVY8SkAxd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--154be1eb--c9a2--50db--b9e4--8c9f064a0b1c-osd--block--154be1eb--c9a2--50db--b9e4--8c9f064a0b1c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tORD2o-QVrP-uu4G-yirs-jkTU-DRT3-VNJpfH', 'scsi-0QEMU_QEMU_HARDDISK_a15d8421-e56a-4621-aed8-2eaa8f026081', 'scsi-SQEMU_QEMU_HARDDISK_a15d8421-e56a-4621-aed8-2eaa8f026081'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:59:10.752723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c033fef4--2688--55e0--9ca7--53dbc156bc4e-osd--block--c033fef4--2688--55e0--9ca7--53dbc156bc4e', 'dm-uuid-LVM-y1Up2sjeVNIrqC866rNW7BXkmCbzvmVfu13dJbP5yR1qydF2fcMgzKn8BOWCDB7t'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f76f8f5b-fbcd-4a13-87b7-7d8b29fb80c4', 'scsi-SQEMU_QEMU_HARDDISK_f76f8f5b-fbcd-4a13-87b7-7d8b29fb80c4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:59:10.752746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:59:10.752793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-06-01 22:59:10.752804 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:59:10.752816 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752878 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 22:59:10.752907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b', 'scsi-SQEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part1', 'scsi-SQEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part14', 'scsi-SQEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part15', 'scsi-SQEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part16', 'scsi-SQEMU_QEMU_HARDDISK_9792ff73-3fa5-45fc-a415-ec3ce4efc22b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:59:10.752926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--83360607--213f--5c54--ae9b--aa580894d048-osd--block--83360607--213f--5c54--ae9b--aa580894d048'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vW2cIk-WvsG-wUkZ-mgQF-ppuo-9BmW-Squun7', 'scsi-0QEMU_QEMU_HARDDISK_8eb07f49-902f-451e-9ead-836ebd4b9d37', 'scsi-SQEMU_QEMU_HARDDISK_8eb07f49-902f-451e-9ead-836ebd4b9d37'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:59:10.752939 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c033fef4--2688--55e0--9ca7--53dbc156bc4e-osd--block--c033fef4--2688--55e0--9ca7--53dbc156bc4e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AdaWf5-eNop-BZtE-2lYp-DJXQ-9w6f-HpSC7v', 'scsi-0QEMU_QEMU_HARDDISK_44465191-0fa1-4c22-9234-5804ca50669c', 'scsi-SQEMU_QEMU_HARDDISK_44465191-0fa1-4c22-9234-5804ca50669c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 22:59:10.752950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a931087f-71b6-44f2-a559-c8deb4b3c146', 'scsi-SQEMU_QEMU_HARDDISK_a931087f-71b6-44f2-a559-c8deb4b3c146'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-01 22:59:10.752977 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-01 22:59:10.752989 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:59:10.753001 | orchestrator |
2025-06-01 22:59:10.753014 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-06-01 22:59:10.753027 | orchestrator | Sunday 01 June 2025 22:57:19 +0000 (0:00:00.539)       0:00:17.582 ***********
2025-06-01 22:59:10.753236 | orchestrator | skipping: [testbed-node-3] => (items dm-0, dm-1, loop0..loop7, sda, sdb, sdc, sdd, sr0; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-01 22:59:10.753562 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:59:10.753377 | orchestrator | skipping: [testbed-node-4] => (items dm-0, dm-1, loop0..loop7, sda, sdb, sdc, sdd, sr0; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-01 22:59:10.753840 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:59:10.753673 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0..loop7, sda; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-06-01 22:59:10.753970 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--83360607--213f--5c54--ae9b--aa580894d048-osd--block--83360607--213f--5c54--ae9b--aa580894d048'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vW2cIk-WvsG-wUkZ-mgQF-ppuo-9BmW-Squun7', 'scsi-0QEMU_QEMU_HARDDISK_8eb07f49-902f-451e-9ead-836ebd4b9d37', 'scsi-SQEMU_QEMU_HARDDISK_8eb07f49-902f-451e-9ead-836ebd4b9d37'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-01 22:59:10.753991 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c033fef4--2688--55e0--9ca7--53dbc156bc4e-osd--block--c033fef4--2688--55e0--9ca7--53dbc156bc4e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AdaWf5-eNop-BZtE-2lYp-DJXQ-9w6f-HpSC7v', 'scsi-0QEMU_QEMU_HARDDISK_44465191-0fa1-4c22-9234-5804ca50669c', 'scsi-SQEMU_QEMU_HARDDISK_44465191-0fa1-4c22-9234-5804ca50669c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-01 22:59:10.754005 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a931087f-71b6-44f2-a559-c8deb4b3c146', 'scsi-SQEMU_QEMU_HARDDISK_a931087f-71b6-44f2-a559-c8deb4b3c146'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-01 22:59:10.754097 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-01 22:59:10.754114 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:59:10.754126 | orchestrator |
2025-06-01 22:59:10.754140 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-06-01 22:59:10.754153 | orchestrator | Sunday 01 June 2025 22:57:20 +0000 (0:00:00.581) 0:00:18.163 ***********
2025-06-01 22:59:10.754166 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:59:10.754179 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:59:10.754191 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:59:10.754202 | orchestrator |
2025-06-01 22:59:10.754213 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-06-01 22:59:10.754224 | orchestrator | Sunday 01 June 2025 22:57:20 +0000 (0:00:00.668) 0:00:18.832 ***********
2025-06-01 22:59:10.754249 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:59:10.754261 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:59:10.754272 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:59:10.754295 | orchestrator |
2025-06-01 22:59:10.754306 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-01 22:59:10.754317 | orchestrator | Sunday 01 June 2025 22:57:21 +0000 (0:00:00.500) 0:00:19.333 ***********
2025-06-01 22:59:10.754328 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:59:10.754339 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:59:10.754350 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:59:10.754361 | orchestrator |
2025-06-01 22:59:10.754372 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-01 22:59:10.754392 | orchestrator | Sunday 01 June 2025 22:57:21 +0000 (0:00:00.654) 0:00:19.987 ***********
2025-06-01 22:59:10.754404 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:59:10.754415 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:59:10.754426 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:59:10.754437 | orchestrator |
2025-06-01 22:59:10.754449 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-01 22:59:10.754466 | orchestrator | Sunday 01 June 2025 22:57:22 +0000 (0:00:00.309) 0:00:20.296 ***********
2025-06-01 22:59:10.754477 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:59:10.754488 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:59:10.754499 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:59:10.754511 | orchestrator |
2025-06-01 22:59:10.754522 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-01 22:59:10.754533 | orchestrator | Sunday 01 June 2025 22:57:22 +0000 (0:00:00.412) 0:00:20.709 ***********
2025-06-01 22:59:10.754544 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:59:10.754555 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:59:10.754566 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:59:10.754577 | orchestrator |
2025-06-01 22:59:10.754588 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-06-01 22:59:10.754599 | orchestrator | Sunday 01 June 2025 22:57:23 +0000 (0:00:00.537) 0:00:21.247 ***********
2025-06-01 22:59:10.754610 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-06-01 22:59:10.754622 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-06-01 22:59:10.754633 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-06-01 22:59:10.754644 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-06-01 22:59:10.754673 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-06-01 22:59:10.754684 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-06-01 22:59:10.754695 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-06-01 22:59:10.754706 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-06-01 22:59:10.754716 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-06-01 22:59:10.754727 | orchestrator |
2025-06-01 22:59:10.754738 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-06-01 22:59:10.754749 | orchestrator | Sunday 01 June 2025 22:57:24 +0000 (0:00:00.889) 0:00:22.137 ***********
2025-06-01 22:59:10.754761 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-01 22:59:10.754772 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-01 22:59:10.754783 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-01 22:59:10.754794 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:59:10.754805 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-01 22:59:10.754816 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-01 22:59:10.754827 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-01 22:59:10.754838 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:59:10.754849 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-01 22:59:10.754860 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-01 22:59:10.754871 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-01 22:59:10.754882 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:59:10.754893 | orchestrator |
2025-06-01 22:59:10.754904 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-06-01 22:59:10.754915 | orchestrator | Sunday 01 June 2025 22:57:24 +0000 (0:00:00.349) 0:00:22.486 ***********
2025-06-01 22:59:10.754928 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 22:59:10.754940 | orchestrator |
2025-06-01 22:59:10.754951 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-01 22:59:10.754971 | orchestrator | Sunday 01 June 2025 22:57:25 +0000 (0:00:00.701) 0:00:23.188 ***********
2025-06-01 22:59:10.754983 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:59:10.754994 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:59:10.755005 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:59:10.755016 | orchestrator |
2025-06-01 22:59:10.755034 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-01 22:59:10.755045 | orchestrator | Sunday 01 June 2025 22:57:25 +0000 (0:00:00.354) 0:00:23.542 ***********
2025-06-01 22:59:10.755056 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:59:10.755067 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:59:10.755079 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:59:10.755089 | orchestrator |
2025-06-01 22:59:10.755101 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-01 22:59:10.755112 | orchestrator | Sunday 01 June 2025 22:57:25 +0000 (0:00:00.347) 0:00:23.890 ***********
2025-06-01 22:59:10.755122 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:59:10.755134 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:59:10.755145 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:59:10.755155 | orchestrator |
2025-06-01 22:59:10.755166 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-01 22:59:10.755177 | orchestrator | Sunday 01 June 2025 22:57:26 +0000 (0:00:00.461) 0:00:24.351 ***********
2025-06-01 22:59:10.755188 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:59:10.755199 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:59:10.755210 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:59:10.755221 | orchestrator |
2025-06-01 22:59:10.755232 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-01 22:59:10.755243 | orchestrator | Sunday 01 June 2025 22:57:27 +0000 (0:00:00.743) 0:00:25.095 ***********
2025-06-01 22:59:10.755254 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 22:59:10.755265 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 22:59:10.755276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 22:59:10.755286 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:59:10.755297 | orchestrator |
2025-06-01 22:59:10.755308 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-01 22:59:10.755319 | orchestrator | Sunday 01 June 2025 22:57:27 +0000 (0:00:00.378) 0:00:25.473 ***********
2025-06-01 22:59:10.755330 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 22:59:10.755342 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 22:59:10.755358 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 22:59:10.755369 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:59:10.755380 | orchestrator |
2025-06-01 22:59:10.755391 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-01 22:59:10.755402 | orchestrator | Sunday 01 June 2025 22:57:27 +0000 (0:00:00.347) 0:00:25.852 ***********
2025-06-01 22:59:10.755413 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 22:59:10.755424 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 22:59:10.755435 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 22:59:10.755445 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:59:10.755456 | orchestrator |
2025-06-01 22:59:10.755467 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-01 22:59:10.755478 | orchestrator | Sunday 01 June 2025 22:57:28 +0000 (0:00:00.347) 0:00:26.200 ***********
2025-06-01 22:59:10.755633 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:59:10.755703 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:59:10.755716 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:59:10.755727 | orchestrator |
2025-06-01 22:59:10.755738 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-01 22:59:10.755766 | orchestrator | Sunday 01 June 2025 22:57:28 +0000 (0:00:00.329) 0:00:26.530 ***********
2025-06-01 22:59:10.755778 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-01 22:59:10.755789 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-01 22:59:10.755800 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-01 22:59:10.755810 | orchestrator |
2025-06-01 22:59:10.755821 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-06-01 22:59:10.755832 | orchestrator | Sunday 01 June 2025 22:57:29 +0000 (0:00:00.541) 0:00:27.071 ***********
2025-06-01 22:59:10.755843 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-01 22:59:10.755854 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-01 22:59:10.755865 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-01 22:59:10.755876 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 22:59:10.755887 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-01 22:59:10.755898 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-01 22:59:10.755909 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-01 22:59:10.755920 | orchestrator |
2025-06-01 22:59:10.755931 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-06-01 22:59:10.755942 | orchestrator | Sunday 01 June 2025 22:57:30 +0000 (0:00:00.955) 0:00:28.027 ***********
2025-06-01 22:59:10.755953 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-01 22:59:10.755964 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-01 22:59:10.755975 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-01 22:59:10.755985 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 22:59:10.755996 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-01 22:59:10.756007 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-01 22:59:10.756018 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-01 22:59:10.756029 | orchestrator |
2025-06-01 22:59:10.756050 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-06-01 22:59:10.756062 | orchestrator | Sunday 01 June 2025 22:57:31 +0000 (0:00:01.949) 0:00:29.976 ***********
2025-06-01 22:59:10.756073 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:59:10.756084 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:59:10.756095 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-06-01 22:59:10.756106 | orchestrator |
2025-06-01 22:59:10.756117 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-06-01 22:59:10.756128 | orchestrator | Sunday 01 June 2025 22:57:32 +0000 (0:00:00.387) 0:00:30.363 ***********
2025-06-01 22:59:10.756140 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-01 22:59:10.756152 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-01 22:59:10.756163 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-01 22:59:10.756188 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-01 22:59:10.756200 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-01 22:59:10.756211 | orchestrator |
2025-06-01 22:59:10.756222 | orchestrator | TASK [generate keys] ***********************************************************
2025-06-01 22:59:10.756233 | orchestrator | Sunday 01 June 2025 22:58:17 +0000 (0:00:44.993) 0:01:15.357 ***********
2025-06-01 22:59:10.756244 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:59:10.756255 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:59:10.756267 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:59:10.756280 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:59:10.756292 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:59:10.756305 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:59:10.756317 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-06-01 22:59:10.756330 | orchestrator |
2025-06-01 22:59:10.756342 | orchestrator | TASK [get keys from monitors] **************************************************
2025-06-01 22:59:10.756354 | orchestrator | Sunday 01 June 2025 22:58:40 +0000 (0:00:23.237) 0:01:38.594 ***********
2025-06-01 22:59:10.756367 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:59:10.756379 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:59:10.756391 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:59:10.756404 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:59:10.756417 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:59:10.756429 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:59:10.756442 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-01 22:59:10.756454 | orchestrator |
2025-06-01 22:59:10.756467 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-06-01 22:59:10.756479 | orchestrator | Sunday 01 June 2025 22:58:52 +0000 (0:00:11.501) 0:01:50.096 ***********
2025-06-01 22:59:10.756492 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:59:10.756505 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-01 22:59:10.756517 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-01 22:59:10.756530 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:59:10.756542 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-01 22:59:10.756555 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-01 22:59:10.756574 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:59:10.756586 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-01 22:59:10.756597 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-01 22:59:10.756607 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:59:10.756627 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-01 22:59:10.756638 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-01 22:59:10.756668 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:59:10.756679 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-01 22:59:10.756690 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-01 22:59:10.756701 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 22:59:10.756712 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-01 22:59:10.756722 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-01 22:59:10.756733 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-06-01 22:59:10.756744 | orchestrator |
2025-06-01 22:59:10.756755 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 22:59:10.756766 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-06-01 22:59:10.756780 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-06-01 22:59:10.756796 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-06-01 22:59:10.756807 | orchestrator |
2025-06-01 22:59:10.756818 | orchestrator |
2025-06-01 22:59:10.756829 | orchestrator |
2025-06-01 22:59:10.756840 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:59:10.756851 | orchestrator | Sunday 01 June 2025 22:59:09 +0000 (0:00:17.214) 0:02:07.311 ***********
2025-06-01 22:59:10.756862 | orchestrator | ===============================================================================
2025-06-01 22:59:10.756872 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.99s
2025-06-01 22:59:10.756883 | orchestrator | generate keys ---------------------------------------------------------- 23.24s
2025-06-01 22:59:10.756894 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.21s
2025-06-01 22:59:10.756905 | orchestrator | get keys from monitors ------------------------------------------------- 11.50s
2025-06-01 22:59:10.756916 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 2.64s
2025-06-01 22:59:10.756927 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.10s
2025-06-01 22:59:10.756937 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.95s
2025-06-01 22:59:10.756948 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.96s
2025-06-01 22:59:10.756959 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.89s
2025-06-01 22:59:10.756969 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.83s
2025-06-01 22:59:10.756980 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.77s
2025-06-01 22:59:10.756991 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.74s
2025-06-01 22:59:10.757002 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.70s
2025-06-01 22:59:10.757012 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.67s
2025-06-01 22:59:10.757023 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.66s
2025-06-01 22:59:10.757033 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.65s
2025-06-01 22:59:10.757044 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.62s
2025-06-01 22:59:10.757055 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.60s
2025-06-01 22:59:10.757072 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.58s
2025-06-01 22:59:10.757083 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.54s
2025-06-01 22:59:10.757094 | orchestrator | 2025-06-01 22:59:10 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED
2025-06-01 22:59:10.757105 | orchestrator | 2025-06-01 22:59:10 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED
2025-06-01 22:59:10.757116 | orchestrator | 2025-06-01 22:59:10 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:59:13.812485 | orchestrator | 2025-06-01 22:59:13 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED
2025-06-01 22:59:13.812905 | orchestrator | 2025-06-01 22:59:13 | INFO  | Task 4024db27-dbce-4d5a-9e50-3ea5aa28e600 is in state STARTED
2025-06-01 22:59:13.814247 | orchestrator | 2025-06-01 22:59:13 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED
2025-06-01 22:59:13.814272 | orchestrator | 2025-06-01 22:59:13 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:59:16.867203 | orchestrator | 2025-06-01 22:59:16 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED
2025-06-01 22:59:16.867811 | orchestrator | 2025-06-01 22:59:16 | INFO  | Task 4024db27-dbce-4d5a-9e50-3ea5aa28e600 is in state STARTED
2025-06-01 22:59:16.868613 | orchestrator | 2025-06-01 22:59:16 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED
2025-06-01 22:59:16.868636 | orchestrator | 2025-06-01 22:59:16 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:59:19.942574 | orchestrator | 2025-06-01 22:59:19 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED
2025-06-01 22:59:19.944081 | orchestrator | 2025-06-01 22:59:19 | INFO  | Task 4024db27-dbce-4d5a-9e50-3ea5aa28e600 is in state STARTED
2025-06-01 22:59:19.945913 | orchestrator | 2025-06-01 22:59:19 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED
2025-06-01 22:59:19.945939 | orchestrator | 2025-06-01 22:59:19 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:59:22.999126 | orchestrator | 2025-06-01 22:59:22 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED
2025-06-01 22:59:22.999268 | orchestrator | 2025-06-01 22:59:22 | INFO  | Task 4024db27-dbce-4d5a-9e50-3ea5aa28e600 is in state STARTED
2025-06-01 22:59:23.002155 | orchestrator | 2025-06-01 22:59:23 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED
2025-06-01 22:59:23.002274 | orchestrator | 2025-06-01 22:59:23 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:59:26.072049 | orchestrator | 2025-06-01 22:59:26 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED
2025-06-01 22:59:26.072161 | orchestrator | 2025-06-01 22:59:26 | INFO  | Task 4024db27-dbce-4d5a-9e50-3ea5aa28e600 is in state STARTED
2025-06-01 22:59:26.073366 | orchestrator | 2025-06-01 22:59:26 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED
2025-06-01 22:59:26.073401 | orchestrator | 2025-06-01 22:59:26 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:59:29.135773 | orchestrator | 2025-06-01 22:59:29 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED
2025-06-01 22:59:29.136164 | orchestrator | 2025-06-01 22:59:29 | INFO  | Task 4024db27-dbce-4d5a-9e50-3ea5aa28e600 is in state STARTED
2025-06-01 22:59:29.138067 | orchestrator | 2025-06-01 22:59:29 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED
2025-06-01 22:59:29.138161 | orchestrator | 2025-06-01 22:59:29 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:59:32.202599 | orchestrator | 2025-06-01 22:59:32 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED
2025-06-01 22:59:32.204761 | orchestrator | 2025-06-01 22:59:32 | INFO  | Task 4024db27-dbce-4d5a-9e50-3ea5aa28e600 is in state STARTED
2025-06-01 22:59:32.206387 | orchestrator | 2025-06-01 22:59:32 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED
2025-06-01 22:59:32.206634 | orchestrator | 2025-06-01 22:59:32 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:59:35.264084 | orchestrator | 2025-06-01 22:59:35 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED
2025-06-01 22:59:35.266316 | orchestrator | 2025-06-01 22:59:35 | INFO  | Task 4024db27-dbce-4d5a-9e50-3ea5aa28e600 is in state STARTED
2025-06-01 22:59:35.268319 | orchestrator | 2025-06-01 22:59:35 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED
2025-06-01 22:59:35.268344 | orchestrator | 2025-06-01 22:59:35 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:59:38.314778 | orchestrator | 2025-06-01 22:59:38 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED
2025-06-01 22:59:38.315539 | orchestrator | 2025-06-01 22:59:38 | INFO  | Task 4024db27-dbce-4d5a-9e50-3ea5aa28e600 is in state STARTED
2025-06-01 22:59:38.317506 | orchestrator | 2025-06-01 22:59:38 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED
2025-06-01 22:59:38.317536 | orchestrator | 2025-06-01 22:59:38 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:59:41.367859 | orchestrator | 2025-06-01 22:59:41 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED
2025-06-01 22:59:41.367962 | orchestrator | 2025-06-01 22:59:41 | INFO  | Task 4024db27-dbce-4d5a-9e50-3ea5aa28e600 is in state SUCCESS
2025-06-01 22:59:41.370803 | orchestrator | 2025-06-01 22:59:41 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED
2025-06-01 22:59:41.371030 | orchestrator | 2025-06-01 22:59:41 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:59:44.426984 | orchestrator | 2025-06-01 22:59:44 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED
2025-06-01 22:59:44.427215 | orchestrator | 2025-06-01 22:59:44 | INFO  | Task a4f51e32-ae52-44c8-903b-70d4567147db is in state STARTED
2025-06-01 22:59:44.429291 | orchestrator | 2025-06-01 22:59:44 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED
2025-06-01 22:59:44.429327 | orchestrator | 2025-06-01 22:59:44 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:59:47.483216 | orchestrator | 2025-06-01 22:59:47 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED
2025-06-01 22:59:47.485730 | orchestrator | 2025-06-01 22:59:47 | INFO  | Task a4f51e32-ae52-44c8-903b-70d4567147db is in state STARTED
2025-06-01 22:59:47.488683 | orchestrator | 2025-06-01 22:59:47 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED
2025-06-01 22:59:47.488717 | orchestrator | 2025-06-01 22:59:47 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:59:50.549584 | orchestrator | 2025-06-01 22:59:50 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED
2025-06-01 22:59:50.549716 | orchestrator | 2025-06-01 22:59:50 | INFO  | Task a4f51e32-ae52-44c8-903b-70d4567147db is in state STARTED
2025-06-01 22:59:50.550571 | orchestrator | 2025-06-01 22:59:50 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED
2025-06-01 22:59:50.552301 | orchestrator | 2025-06-01 22:59:50 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:59:53.607354 | orchestrator | 2025-06-01 22:59:53 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED
2025-06-01 22:59:53.608144 | orchestrator | 2025-06-01 22:59:53 | INFO  | Task a4f51e32-ae52-44c8-903b-70d4567147db is in state STARTED
2025-06-01 22:59:53.608616 | orchestrator | 2025-06-01 22:59:53 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED
2025-06-01 22:59:53.608640 | orchestrator | 2025-06-01 22:59:53 | INFO  | Wait 1 second(s) until the next check
2025-06-01 22:59:56.652805 | orchestrator | 2025-06-01 22:59:56 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED
2025-06-01 22:59:56.653843 | orchestrator |
2025-06-01 22:59:56 | INFO  | Task a4f51e32-ae52-44c8-903b-70d4567147db is in state STARTED 2025-06-01 22:59:56.654119 | orchestrator | 2025-06-01 22:59:56 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED 2025-06-01 22:59:56.654144 | orchestrator | 2025-06-01 22:59:56 | INFO  | Wait 1 second(s) until the next check 2025-06-01 22:59:59.700433 | orchestrator | 2025-06-01 22:59:59 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED 2025-06-01 22:59:59.702509 | orchestrator | 2025-06-01 22:59:59 | INFO  | Task a4f51e32-ae52-44c8-903b-70d4567147db is in state STARTED 2025-06-01 22:59:59.704764 | orchestrator | 2025-06-01 22:59:59 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED 2025-06-01 22:59:59.704778 | orchestrator | 2025-06-01 22:59:59 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:00:02.765945 | orchestrator | 2025-06-01 23:00:02 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED 2025-06-01 23:00:02.768553 | orchestrator | 2025-06-01 23:00:02 | INFO  | Task a4f51e32-ae52-44c8-903b-70d4567147db is in state STARTED 2025-06-01 23:00:02.771398 | orchestrator | 2025-06-01 23:00:02 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED 2025-06-01 23:00:02.771429 | orchestrator | 2025-06-01 23:00:02 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:00:05.818175 | orchestrator | 2025-06-01 23:00:05 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED 2025-06-01 23:00:05.819052 | orchestrator | 2025-06-01 23:00:05 | INFO  | Task a4f51e32-ae52-44c8-903b-70d4567147db is in state STARTED 2025-06-01 23:00:05.820407 | orchestrator | 2025-06-01 23:00:05 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED 2025-06-01 23:00:05.820430 | orchestrator | 2025-06-01 23:00:05 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:00:08.874850 | orchestrator | 2025-06-01 23:00:08 | INFO  | Task 
bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED 2025-06-01 23:00:08.876506 | orchestrator | 2025-06-01 23:00:08 | INFO  | Task a4f51e32-ae52-44c8-903b-70d4567147db is in state STARTED 2025-06-01 23:00:08.877879 | orchestrator | 2025-06-01 23:00:08 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED 2025-06-01 23:00:08.877903 | orchestrator | 2025-06-01 23:00:08 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:00:11.937941 | orchestrator | 2025-06-01 23:00:11 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED 2025-06-01 23:00:11.938469 | orchestrator | 2025-06-01 23:00:11 | INFO  | Task a4f51e32-ae52-44c8-903b-70d4567147db is in state STARTED 2025-06-01 23:00:11.940093 | orchestrator | 2025-06-01 23:00:11 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED 2025-06-01 23:00:11.940120 | orchestrator | 2025-06-01 23:00:11 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:00:14.979463 | orchestrator | 2025-06-01 23:00:14 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED 2025-06-01 23:00:14.980390 | orchestrator | 2025-06-01 23:00:14 | INFO  | Task a4f51e32-ae52-44c8-903b-70d4567147db is in state STARTED 2025-06-01 23:00:14.981439 | orchestrator | 2025-06-01 23:00:14 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED 2025-06-01 23:00:14.981467 | orchestrator | 2025-06-01 23:00:14 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:00:18.034631 | orchestrator | 2025-06-01 23:00:18 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED 2025-06-01 23:00:18.037256 | orchestrator | 2025-06-01 23:00:18 | INFO  | Task a4f51e32-ae52-44c8-903b-70d4567147db is in state STARTED 2025-06-01 23:00:18.040259 | orchestrator | 2025-06-01 23:00:18 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED 2025-06-01 23:00:18.040295 | orchestrator | 2025-06-01 23:00:18 | INFO  | Wait 1 second(s) until the next 
check 2025-06-01 23:00:21.095166 | orchestrator | 2025-06-01 23:00:21 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state STARTED 2025-06-01 23:00:21.095834 | orchestrator | 2025-06-01 23:00:21 | INFO  | Task a4f51e32-ae52-44c8-903b-70d4567147db is in state STARTED 2025-06-01 23:00:21.097145 | orchestrator | 2025-06-01 23:00:21 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED 2025-06-01 23:00:21.097177 | orchestrator | 2025-06-01 23:00:21 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:00:24.142408 | orchestrator | 2025-06-01 23:00:24 | INFO  | Task bfd42202-f306-4a78-8314-8a8e2122efa6 is in state SUCCESS 2025-06-01 23:00:24.144311 | orchestrator | 2025-06-01 23:00:24.144360 | orchestrator | 2025-06-01 23:00:24.144374 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-06-01 23:00:24.144387 | orchestrator | 2025-06-01 23:00:24.144398 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-06-01 23:00:24.144410 | orchestrator | Sunday 01 June 2025 22:59:13 +0000 (0:00:00.154) 0:00:00.154 *********** 2025-06-01 23:00:24.144421 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-06-01 23:00:24.144434 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-01 23:00:24.144445 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-01 23:00:24.144455 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-06-01 23:00:24.144466 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-01 23:00:24.144477 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-06-01 
23:00:24.144488 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-06-01 23:00:24.144499 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-06-01 23:00:24.144509 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-06-01 23:00:24.144520 | orchestrator |
2025-06-01 23:00:24.144531 | orchestrator | TASK [Create share directory] **************************************************
2025-06-01 23:00:24.144542 | orchestrator | Sunday 01 June 2025 22:59:18 +0000 (0:00:04.231) 0:00:04.386 ***********
2025-06-01 23:00:24.144554 | orchestrator | changed: [testbed-manager -> localhost]
2025-06-01 23:00:24.144565 | orchestrator |
2025-06-01 23:00:24.144576 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-06-01 23:00:24.144587 | orchestrator | Sunday 01 June 2025 22:59:19 +0000 (0:00:01.068) 0:00:05.454 ***********
2025-06-01 23:00:24.144598 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-06-01 23:00:24.144609 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-06-01 23:00:24.144678 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-06-01 23:00:24.144692 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-06-01 23:00:24.144702 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-06-01 23:00:24.144713 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-06-01 23:00:24.144724 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-06-01 23:00:24.144735 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-06-01 23:00:24.144823 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-06-01 23:00:24.144839 | orchestrator |
2025-06-01 23:00:24.144850 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-06-01 23:00:24.144861 | orchestrator | Sunday 01 June 2025 22:59:33 +0000 (0:00:14.386) 0:00:19.840 ***********
2025-06-01 23:00:24.144872 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-06-01 23:00:24.144883 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-06-01 23:00:24.144894 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-06-01 23:00:24.144905 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-06-01 23:00:24.144916 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-06-01 23:00:24.144927 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-06-01 23:00:24.144938 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-06-01 23:00:24.144949 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-06-01 23:00:24.144960 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-06-01 23:00:24.144970 | orchestrator |
2025-06-01 23:00:24.144981 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:00:24.144993 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:00:24.145005 | orchestrator |
2025-06-01 23:00:24.145016 | orchestrator |
2025-06-01 23:00:24.145027 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:00:24.145053 | orchestrator | Sunday 01 June 2025 22:59:40 +0000 (0:00:07.045) 0:00:26.886 ***********
2025-06-01 23:00:24.145064 | orchestrator | ===============================================================================
2025-06-01 23:00:24.145075 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.39s
2025-06-01 23:00:24.145086 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.05s
2025-06-01 23:00:24.145097 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.23s
2025-06-01 23:00:24.145107 | orchestrator | Create share directory -------------------------------------------------- 1.07s
2025-06-01 23:00:24.145120 | orchestrator |
2025-06-01 23:00:24.145139 | orchestrator |
2025-06-01 23:00:24.145156 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 23:00:24.145174 | orchestrator |
2025-06-01 23:00:24.145207 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 23:00:24.145227 | orchestrator | Sunday 01 June 2025 22:58:34 +0000 (0:00:00.269) 0:00:00.269 ***********
2025-06-01 23:00:24.145243 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:00:24.145255 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:00:24.145266 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:00:24.145276 | orchestrator |
2025-06-01 23:00:24.145287 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 23:00:24.145298 | orchestrator | Sunday 01 June 2025 22:58:34 +0000 (0:00:00.302) 0:00:00.571 ***********
2025-06-01 23:00:24.145308 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2025-06-01 23:00:24.145331 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2025-06-01 23:00:24.145342 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2025-06-01 23:00:24.145353 | orchestrator |
2025-06-01
23:00:24.145363 | orchestrator | PLAY [Apply role horizon] ******************************************************
2025-06-01 23:00:24.145375 | orchestrator |
2025-06-01 23:00:24.145386 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-06-01 23:00:24.145397 | orchestrator | Sunday 01 June 2025 22:58:34 +0000 (0:00:00.406) 0:00:00.977 ***********
2025-06-01 23:00:24.145408 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:00:24.145419 | orchestrator |
2025-06-01 23:00:24.145429 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2025-06-01 23:00:24.145440 | orchestrator | Sunday 01 June 2025 22:58:35 +0000 (0:00:00.542) 0:00:01.520 ***********
2025-06-01 23:00:24.145457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-01 23:00:24.145497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-01 23:00:24.145522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-01 23:00:24.145536 | orchestrator |
2025-06-01 23:00:24.145554 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2025-06-01 23:00:24.145566 | orchestrator | Sunday 01 June 2025 22:58:36 +0000 (0:00:01.065) 0:00:02.586 ***********
2025-06-01 23:00:24.145578 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:00:24.145591 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:00:24.145603 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:00:24.145616 | orchestrator |
2025-06-01
23:00:24.145629 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-06-01 23:00:24.145641 | orchestrator | Sunday 01 June 2025 22:58:36 +0000 (0:00:00.457) 0:00:03.044 ***********
2025-06-01 23:00:24.145693 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2025-06-01 23:00:24.145714 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2025-06-01 23:00:24.145728 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2025-06-01 23:00:24.145741 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2025-06-01 23:00:24.145753 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2025-06-01 23:00:24.145765 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2025-06-01 23:00:24.145777 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2025-06-01 23:00:24.145789 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2025-06-01 23:00:24.145802 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2025-06-01 23:00:24.145814 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2025-06-01 23:00:24.145824 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2025-06-01 23:00:24.145835 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2025-06-01 23:00:24.145846 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2025-06-01 23:00:24.145857 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2025-06-01 23:00:24.145867 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2025-06-01 23:00:24.145878 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2025-06-01 23:00:24.145889 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2025-06-01 23:00:24.145900 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2025-06-01 23:00:24.145911 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2025-06-01 23:00:24.145921 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2025-06-01 23:00:24.145932 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2025-06-01 23:00:24.145943 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2025-06-01 23:00:24.145954 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2025-06-01 23:00:24.145965 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2025-06-01 23:00:24.145977 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2025-06-01 23:00:24.145989 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2025-06-01 23:00:24.146000 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2025-06-01 23:00:24.146011 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2025-06-01 23:00:24.146082 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2025-06-01 23:00:24.146093 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2025-06-01 23:00:24.146104 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2025-06-01 23:00:24.146122 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2025-06-01 23:00:24.146133 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2025-06-01 23:00:24.146151 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2025-06-01 23:00:24.146217 | orchestrator |
2025-06-01 23:00:24.146241 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-01 23:00:24.146261 | orchestrator | Sunday 01 June 2025 22:58:37 +0000 (0:00:00.708) 0:00:03.752 ***********
2025-06-01 23:00:24.146279 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:00:24.146295 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:00:24.146306 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:00:24.146317 | orchestrator |
2025-06-01 23:00:24.146327 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-01 23:00:24.146338 | orchestrator | Sunday 01 June 2025 22:58:38 +0000 (0:00:00.135) 0:00:04.082 ***********
2025-06-01 23:00:24.146349 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:00:24.146360 | orchestrator |
2025-06-01 23:00:24.146379 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-01 23:00:24.146391 | orchestrator | Sunday 01 June 2025 22:58:38 +0000 (0:00:00.135) 0:00:04.217 ***********
2025-06-01 23:00:24.146401 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:00:24.146413 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:00:24.146423 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:00:24.146434 | orchestrator |
2025-06-01 23:00:24.146445 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-01 23:00:24.146455 | orchestrator | Sunday 01 June 2025 22:58:38 +0000 (0:00:00.514) 0:00:04.732 ***********
2025-06-01 23:00:24.146466 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:00:24.146477 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:00:24.146487 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:00:24.146498 | orchestrator |
2025-06-01 23:00:24.146508 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-01 23:00:24.146519 | orchestrator | Sunday 01 June 2025 22:58:38 +0000 (0:00:00.311) 0:00:05.043 ***********
2025-06-01 23:00:24.146530 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:00:24.146541 | orchestrator |
2025-06-01 23:00:24.146551 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-01 23:00:24.146562 | orchestrator | Sunday 01 June 2025 22:58:39 +0000 (0:00:00.130) 0:00:05.174 ***********
2025-06-01 23:00:24.146572 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:00:24.146583 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:00:24.146594 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:00:24.146605 | orchestrator |
2025-06-01 23:00:24.146616 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-01 23:00:24.146627 | orchestrator | Sunday 01 June 2025 22:58:39 +0000 (0:00:00.259) 0:00:05.433 ***********
2025-06-01 23:00:24.146638 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:00:24.146924 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:00:24.147006 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:00:24.147021 | orchestrator |
2025-06-01 23:00:24.147036 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-01 23:00:24.147050 | orchestrator | Sunday 01 June 2025 22:58:39 +0000 (0:00:00.275) 0:00:05.709 ***********
2025-06-01 23:00:24.147061 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:00:24.147074 | orchestrator |
2025-06-01 23:00:24.147086 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-01 23:00:24.147097 | orchestrator | Sunday 01 June 2025 22:58:39 +0000 (0:00:00.338) 0:00:06.048 ***********
2025-06-01 23:00:24.147107 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:00:24.147152 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:00:24.147164 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:00:24.147175 | orchestrator |
2025-06-01 23:00:24.147186 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-01 23:00:24.147197 | orchestrator | Sunday 01 June 2025 22:58:40 +0000 (0:00:00.318) 0:00:06.366 ***********
2025-06-01 23:00:24.147208 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:00:24.147218 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:00:24.147229 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:00:24.147240 | orchestrator |
2025-06-01 23:00:24.147251 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-01 23:00:24.147262 | orchestrator | Sunday 01 June 2025 22:58:40 +0000 (0:00:00.329) 0:00:06.695 ***********
2025-06-01 23:00:24.147273 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:00:24.147284 | orchestrator |
2025-06-01 23:00:24.147295 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-01 23:00:24.147306 | orchestrator | Sunday 01 June 2025 22:58:40 +0000 (0:00:00.125) 0:00:06.821 ***********
2025-06-01 23:00:24.147317 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:00:24.147328 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:00:24.147339 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:00:24.147349 | orchestrator |
2025-06-01 23:00:24.147360 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-01 23:00:24.147371 | orchestrator | Sunday 01 June 2025 22:58:41 +0000 (0:00:00.286) 0:00:07.107 ***********
2025-06-01 23:00:24.147381 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:00:24.147392 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:00:24.147403 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:00:24.147413 | orchestrator |
2025-06-01 23:00:24.147424 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-01 23:00:24.147435 | orchestrator | Sunday 01 June 2025 22:58:41 +0000 (0:00:00.539) 0:00:07.647 ***********
2025-06-01 23:00:24.147446 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:00:24.147456 | orchestrator |
2025-06-01 23:00:24.147467 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-01 23:00:24.147478 | orchestrator | Sunday 01 June 2025 22:58:41 +0000 (0:00:00.134) 0:00:07.782 ***********
2025-06-01 23:00:24.147488 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:00:24.147499 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:00:24.147510 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:00:24.147521 | orchestrator |
2025-06-01 23:00:24.147531 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-01 23:00:24.147542 | orchestrator | Sunday 01 June 2025 22:58:42 +0000 (0:00:00.380) 0:00:08.162 ***********
2025-06-01 23:00:24.147553 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:00:24.147564 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:00:24.147575 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:00:24.147586 | orchestrator |
2025-06-01 23:00:24.147597 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-01 23:00:24.147628 | orchestrator | Sunday 01 June 2025 22:58:42 +0000 (0:00:00.314) 0:00:08.476 ***********
2025-06-01 23:00:24.147639 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:00:24.147680 | orchestrator |
2025-06-01 23:00:24.147691 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-01 23:00:24.147702 | orchestrator | Sunday 01 June 2025 22:58:42 +0000 (0:00:00.142) 0:00:08.618 ***********
2025-06-01 23:00:24.147713 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:00:24.147724 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:00:24.147735 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:00:24.147745 | orchestrator |
2025-06-01 23:00:24.147756 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-01 23:00:24.147767 | orchestrator | Sunday 01 June 2025 22:58:43 +0000 (0:00:00.474) 0:00:09.093 ***********
2025-06-01 23:00:24.147777 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:00:24.147837 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:00:24.147849 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:00:24.147860 | orchestrator |
2025-06-01 23:00:24.147871 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-01 23:00:24.147882 | orchestrator | Sunday 01 June 2025 22:58:43 +0000 (0:00:00.329) 0:00:09.422 ***********
2025-06-01 23:00:24.147893 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:00:24.147903 |
orchestrator | 2025-06-01 23:00:24.147914 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 23:00:24.147925 | orchestrator | Sunday 01 June 2025 22:58:43 +0000 (0:00:00.130) 0:00:09.553 *********** 2025-06-01 23:00:24.147936 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:00:24.147947 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:00:24.147957 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:00:24.147968 | orchestrator | 2025-06-01 23:00:24.147979 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 23:00:24.147989 | orchestrator | Sunday 01 June 2025 22:58:43 +0000 (0:00:00.272) 0:00:09.825 *********** 2025-06-01 23:00:24.148001 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:00:24.148012 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:00:24.148023 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:00:24.148033 | orchestrator | 2025-06-01 23:00:24.148044 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-01 23:00:24.148055 | orchestrator | Sunday 01 June 2025 22:58:44 +0000 (0:00:00.327) 0:00:10.152 *********** 2025-06-01 23:00:24.148066 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:00:24.148076 | orchestrator | 2025-06-01 23:00:24.148087 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 23:00:24.148098 | orchestrator | Sunday 01 June 2025 22:58:44 +0000 (0:00:00.127) 0:00:10.280 *********** 2025-06-01 23:00:24.148109 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:00:24.148119 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:00:24.148130 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:00:24.148141 | orchestrator | 2025-06-01 23:00:24.148151 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 
23:00:24.148162 | orchestrator | Sunday 01 June 2025 22:58:44 +0000 (0:00:00.498) 0:00:10.779 *********** 2025-06-01 23:00:24.148173 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:00:24.148183 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:00:24.148194 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:00:24.148205 | orchestrator | 2025-06-01 23:00:24.148216 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-01 23:00:24.148227 | orchestrator | Sunday 01 June 2025 22:58:45 +0000 (0:00:00.340) 0:00:11.119 *********** 2025-06-01 23:00:24.148237 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:00:24.148248 | orchestrator | 2025-06-01 23:00:24.148259 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 23:00:24.148269 | orchestrator | Sunday 01 June 2025 22:58:45 +0000 (0:00:00.124) 0:00:11.243 *********** 2025-06-01 23:00:24.148280 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:00:24.148291 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:00:24.148302 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:00:24.148312 | orchestrator | 2025-06-01 23:00:24.148323 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 23:00:24.148334 | orchestrator | Sunday 01 June 2025 22:58:45 +0000 (0:00:00.285) 0:00:11.529 *********** 2025-06-01 23:00:24.148345 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:00:24.148356 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:00:24.148366 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:00:24.148377 | orchestrator | 2025-06-01 23:00:24.148388 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-01 23:00:24.148398 | orchestrator | Sunday 01 June 2025 22:58:46 +0000 (0:00:00.665) 0:00:12.195 *********** 2025-06-01 23:00:24.148409 | orchestrator | skipping: [testbed-node-0] 
2025-06-01 23:00:24.148420 | orchestrator | 2025-06-01 23:00:24.148431 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 23:00:24.148449 | orchestrator | Sunday 01 June 2025 22:58:46 +0000 (0:00:00.154) 0:00:12.349 *********** 2025-06-01 23:00:24.148460 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:00:24.148471 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:00:24.148482 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:00:24.148492 | orchestrator | 2025-06-01 23:00:24.148503 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-06-01 23:00:24.148514 | orchestrator | Sunday 01 June 2025 22:58:46 +0000 (0:00:00.354) 0:00:12.704 *********** 2025-06-01 23:00:24.148537 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:00:24.148548 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:00:24.148558 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:00:24.148569 | orchestrator | 2025-06-01 23:00:24.148580 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-06-01 23:00:24.148591 | orchestrator | Sunday 01 June 2025 22:58:48 +0000 (0:00:01.585) 0:00:14.290 *********** 2025-06-01 23:00:24.148602 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-01 23:00:24.148614 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-01 23:00:24.148624 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-01 23:00:24.148635 | orchestrator | 2025-06-01 23:00:24.148675 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-06-01 23:00:24.148687 | orchestrator | Sunday 01 June 2025 22:58:50 +0000 (0:00:02.330) 0:00:16.620 *********** 2025-06-01 23:00:24.148698 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-01 23:00:24.148711 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-01 23:00:24.148722 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-01 23:00:24.148732 | orchestrator |
2025-06-01 23:00:24.148743 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-06-01 23:00:24.148762 | orchestrator | Sunday 01 June 2025 22:58:52 +0000 (0:00:02.217) 0:00:18.837 ***********
2025-06-01 23:00:24.148774 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-01 23:00:24.148785 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-01 23:00:24.148795 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-01 23:00:24.148806 | orchestrator |
2025-06-01 23:00:24.148817 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-06-01 23:00:24.148827 | orchestrator | Sunday 01 June 2025 22:58:54 +0000 (0:00:01.492) 0:00:20.330 ***********
2025-06-01 23:00:24.148838 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:00:24.148849 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:00:24.148860 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:00:24.148870 | orchestrator |
2025-06-01 23:00:24.148881 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-06-01 23:00:24.148892 | orchestrator | Sunday 01 June 2025 22:58:54 +0000 (0:00:00.318) 0:00:20.649 ***********
2025-06-01 23:00:24.148902 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:00:24.148913 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:00:24.148923 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:00:24.148934 | orchestrator |
2025-06-01 23:00:24.148945 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-06-01 23:00:24.148956 | orchestrator | Sunday 01 June 2025 22:58:54 +0000 (0:00:00.310) 0:00:20.960 ***********
2025-06-01 23:00:24.148966 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:00:24.148985 | orchestrator |
2025-06-01 23:00:24.148996 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2025-06-01 23:00:24.149007 | orchestrator | Sunday 01 June 2025 22:58:55 +0000 (0:00:00.961) 0:00:21.921 ***********
2025-06-01 23:00:24.149024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-01 23:00:24.149059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-01 23:00:24.149087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-01 23:00:24.149100 | orchestrator |
2025-06-01 23:00:24.149112 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2025-06-01 23:00:24.149123 | orchestrator | Sunday 01 June 2025 22:58:57 +0000 (0:00:01.633) 0:00:23.555 ***********
2025-06-01 23:00:24.149146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-01 23:00:24.149166 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:00:24.149190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-01 23:00:24.149204 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:00:24.149216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-01 23:00:24.149235 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:00:24.149246 | orchestrator |
2025-06-01 23:00:24.149257 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2025-06-01 23:00:24.149268 | orchestrator | Sunday 01 June 2025 22:58:58 +0000 (0:00:00.648) 0:00:24.203 ***********
2025-06-01 23:00:24.149295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon':
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-01 23:00:24.149308 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:00:24.149327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-01 23:00:24.149354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-01 23:00:24.149380 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:00:24.149391 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:00:24.149402 | orchestrator |
2025-06-01 23:00:24.149413 | orchestrator | TASK [horizon : Deploy horizon container] **************************************
2025-06-01 23:00:24.149424 | orchestrator | Sunday 01 June 2025 22:58:59 +0000 (0:00:01.111) 0:00:25.315 ***********
2025-06-01 23:00:24.149437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True,
'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-01 23:00:24.149464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-01 23:00:24.149490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-01 23:00:24.149502 | orchestrator | 2025-06-01 23:00:24.149513 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-01 23:00:24.149524 | orchestrator | Sunday 01 June 2025 22:59:00 +0000 (0:00:01.610) 0:00:26.926 *********** 2025-06-01 23:00:24.149535 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:00:24.149546 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:00:24.149557 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:00:24.149568 | orchestrator | 2025-06-01 23:00:24.149579 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-01 23:00:24.149590 | orchestrator | Sunday 01 June 2025 22:59:01 +0000 (0:00:00.300) 0:00:27.226 *********** 2025-06-01 23:00:24.149608 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:00:24.149630 | orchestrator | 2025-06-01 23:00:24.149641 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-06-01 23:00:24.149695 | orchestrator | Sunday 01 June 2025 22:59:01 +0000 (0:00:00.782) 0:00:28.008 *********** 2025-06-01 23:00:24.149706 | 
orchestrator | changed: [testbed-node-0] 2025-06-01 23:00:24.149717 | orchestrator | 2025-06-01 23:00:24.149728 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-06-01 23:00:24.149739 | orchestrator | Sunday 01 June 2025 22:59:04 +0000 (0:00:02.163) 0:00:30.172 *********** 2025-06-01 23:00:24.149750 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:00:24.149761 | orchestrator | 2025-06-01 23:00:24.149772 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-06-01 23:00:24.149783 | orchestrator | Sunday 01 June 2025 22:59:06 +0000 (0:00:02.122) 0:00:32.295 *********** 2025-06-01 23:00:24.149794 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:00:24.149805 | orchestrator | 2025-06-01 23:00:24.149816 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-01 23:00:24.149826 | orchestrator | Sunday 01 June 2025 22:59:21 +0000 (0:00:14.942) 0:00:47.237 *********** 2025-06-01 23:00:24.149837 | orchestrator | 2025-06-01 23:00:24.149848 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-01 23:00:24.149859 | orchestrator | Sunday 01 June 2025 22:59:21 +0000 (0:00:00.068) 0:00:47.306 *********** 2025-06-01 23:00:24.149870 | orchestrator | 2025-06-01 23:00:24.149881 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-01 23:00:24.149891 | orchestrator | Sunday 01 June 2025 22:59:21 +0000 (0:00:00.066) 0:00:47.372 *********** 2025-06-01 23:00:24.149902 | orchestrator | 2025-06-01 23:00:24.149913 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-06-01 23:00:24.149923 | orchestrator | Sunday 01 June 2025 22:59:21 +0000 (0:00:00.065) 0:00:47.438 *********** 2025-06-01 23:00:24.149934 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:00:24.149945 | 
orchestrator | changed: [testbed-node-2] 2025-06-01 23:00:24.149956 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:00:24.149967 | orchestrator | 2025-06-01 23:00:24.149978 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:00:24.149989 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-06-01 23:00:24.150002 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-06-01 23:00:24.150062 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-06-01 23:00:24.150076 | orchestrator | 2025-06-01 23:00:24.150087 | orchestrator | 2025-06-01 23:00:24.150098 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 23:00:24.150109 | orchestrator | Sunday 01 June 2025 23:00:20 +0000 (0:00:59.411) 0:01:46.850 *********** 2025-06-01 23:00:24.150120 | orchestrator | =============================================================================== 2025-06-01 23:00:24.150131 | orchestrator | horizon : Restart horizon container ------------------------------------ 59.41s 2025-06-01 23:00:24.150142 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.94s 2025-06-01 23:00:24.150153 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.33s 2025-06-01 23:00:24.150164 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.22s 2025-06-01 23:00:24.150175 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.16s 2025-06-01 23:00:24.150186 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.12s 2025-06-01 23:00:24.150197 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 
1.63s 2025-06-01 23:00:24.150208 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.61s 2025-06-01 23:00:24.150227 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.59s 2025-06-01 23:00:24.150238 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.49s 2025-06-01 23:00:24.150250 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.11s 2025-06-01 23:00:24.150261 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.07s 2025-06-01 23:00:24.150271 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.96s 2025-06-01 23:00:24.150282 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.78s 2025-06-01 23:00:24.150293 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.71s 2025-06-01 23:00:24.150304 | orchestrator | horizon : Update policy file name --------------------------------------- 0.67s 2025-06-01 23:00:24.150321 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.65s 2025-06-01 23:00:24.150333 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s 2025-06-01 23:00:24.150344 | orchestrator | horizon : Update policy file name --------------------------------------- 0.54s 2025-06-01 23:00:24.150355 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s 2025-06-01 23:00:24.150366 | orchestrator | 2025-06-01 23:00:24 | INFO  | Task a4f51e32-ae52-44c8-903b-70d4567147db is in state STARTED 2025-06-01 23:00:24.150377 | orchestrator | 2025-06-01 23:00:24 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED 2025-06-01 23:00:24.150396 | orchestrator | 2025-06-01 23:00:24 | INFO  | Wait 1 second(s) until the 
next check 2025-06-01 23:00:27.198982 | orchestrator | 2025-06-01 23:00:27 | INFO  | Task a4f51e32-ae52-44c8-903b-70d4567147db is in state STARTED 2025-06-01 23:00:27.199123 | orchestrator | 2025-06-01 23:00:27 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED 2025-06-01 23:00:27.199152 | orchestrator | 2025-06-01 23:00:27 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:00:42.530788 | orchestrator | 2025-06-01 23:00:42 |
INFO  | Task a4f51e32-ae52-44c8-903b-70d4567147db is in state SUCCESS 2025-06-01 23:00:42.530928 | orchestrator | 2025-06-01 23:00:42 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED 2025-06-01 23:00:42.530982 | orchestrator | 2025-06-01 23:00:42 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:00:45.601549 | orchestrator | 2025-06-01 23:00:45 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED 2025-06-01 23:00:45.603604 | orchestrator | 2025-06-01 23:00:45 | INFO  | Task 5b37678a-62f3-4782-be0a-910bf712992f is in state STARTED 2025-06-01 23:00:45.605907 | orchestrator | 2025-06-01 23:00:45 | INFO  | Task 3bffeaf3-c713-4a36-bcbc-e72bcdf9090d is in state STARTED 2025-06-01 23:00:45.607948 | orchestrator | 2025-06-01 23:00:45 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED 2025-06-01 23:00:45.607992 | orchestrator | 2025-06-01 23:00:45 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:00:48.641893 | orchestrator | 2025-06-01 23:00:48 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED 2025-06-01 23:00:48.642438 | orchestrator | 2025-06-01 23:00:48 | INFO  | Task 5b37678a-62f3-4782-be0a-910bf712992f is in state SUCCESS 2025-06-01 23:00:48.642823 | orchestrator | 2025-06-01 23:00:48 | INFO  | Task 3bffeaf3-c713-4a36-bcbc-e72bcdf9090d is in state STARTED 2025-06-01 23:00:48.643911 | orchestrator | 2025-06-01 23:00:48 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED 2025-06-01 23:00:48.643935 | orchestrator | 2025-06-01 23:00:48 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:00:51.689222 | orchestrator | 2025-06-01 23:00:51 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state STARTED 2025-06-01 23:00:51.689361 | orchestrator | 2025-06-01 23:00:51 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED 2025-06-01 23:00:51.689403 | orchestrator | 2025-06-01 23:00:51 | INFO  | Task 
483976cf-99c0-4a38-9e79-cb038a956b81 is in state STARTED 2025-06-01 23:00:51.690064 | orchestrator | 2025-06-01 23:00:51 | INFO  | Task 3bffeaf3-c713-4a36-bcbc-e72bcdf9090d is in state STARTED 2025-06-01 23:00:51.690797 | orchestrator | 2025-06-01 23:00:51 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state STARTED 2025-06-01 23:00:51.691168 | orchestrator | 2025-06-01 23:00:51 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:01:22.261794 | orchestrator | 2025-06-01 23:01:22 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state STARTED 2025-06-01 23:01:22.262828 | orchestrator | 2025-06-01 23:01:22 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED 2025-06-01 23:01:22.263355 | orchestrator | 2025-06-01 23:01:22 | INFO  | Task
483976cf-99c0-4a38-9e79-cb038a956b81 is in state STARTED 2025-06-01 23:01:22.265030 | orchestrator | 2025-06-01 23:01:22 | INFO  | Task 3bffeaf3-c713-4a36-bcbc-e72bcdf9090d is in state STARTED 2025-06-01 23:01:22.268885 | orchestrator | 2025-06-01 23:01:22 | INFO  | Task 3549e66d-2d06-4d1b-8fd6-214d25245b66 is in state SUCCESS 2025-06-01 23:01:22.269107 | orchestrator | 2025-06-01 23:01:22.269176 | orchestrator | 2025-06-01 23:01:22.269190 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-06-01 23:01:22.269198 | orchestrator | 2025-06-01 23:01:22.269205 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-06-01 23:01:22.269212 | orchestrator | Sunday 01 June 2025 22:59:45 +0000 (0:00:00.234) 0:00:00.234 *********** 2025-06-01 23:01:22.269220 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-06-01 23:01:22.269229 | orchestrator | 2025-06-01 23:01:22.269237 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-06-01 23:01:22.269243 | orchestrator | Sunday 01 June 2025 22:59:45 +0000 (0:00:00.196) 0:00:00.431 *********** 2025-06-01 23:01:22.269251 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-06-01 23:01:22.269259 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-06-01 23:01:22.269268 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-06-01 23:01:22.269275 | orchestrator | 2025-06-01 23:01:22.269282 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-06-01 23:01:22.269318 | orchestrator | Sunday 01 June 2025 22:59:46 +0000 (0:00:01.298) 0:00:01.729 *********** 2025-06-01 23:01:22.269343 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': 
'/opt/cephclient/configuration/ceph.conf'}) 2025-06-01 23:01:22.269350 | orchestrator | 2025-06-01 23:01:22.269357 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-06-01 23:01:22.269363 | orchestrator | Sunday 01 June 2025 22:59:47 +0000 (0:00:01.194) 0:00:02.924 *********** 2025-06-01 23:01:22.269370 | orchestrator | changed: [testbed-manager] 2025-06-01 23:01:22.269377 | orchestrator | 2025-06-01 23:01:22.269384 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-06-01 23:01:22.269391 | orchestrator | Sunday 01 June 2025 22:59:48 +0000 (0:00:00.990) 0:00:03.914 *********** 2025-06-01 23:01:22.269397 | orchestrator | changed: [testbed-manager] 2025-06-01 23:01:22.269458 | orchestrator | 2025-06-01 23:01:22.269467 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-06-01 23:01:22.269473 | orchestrator | Sunday 01 June 2025 22:59:49 +0000 (0:00:00.895) 0:00:04.810 *********** 2025-06-01 23:01:22.269480 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
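The "FAILED - RETRYING: … (10 retries left)." line above is produced by Ansible's task retry loop (`retries`/`delay`/`until`): the task is re-run until its `until` condition holds, and each unsuccessful attempt prints a countdown before the final result is reported. A minimal sketch of that pattern, with hypothetical module and values rather than the actual `osism.services.cephclient` task source, looks like:

```yaml
# Hypothetical sketch of an Ansible retry loop; not the real
# osism.services.cephclient "Manage cephclient service" task.
# Ansible re-runs the task until the `until` condition holds,
# printing "FAILED - RETRYING: ... (N retries left)." after each
# unsuccessful attempt.
- name: Manage cephclient service
  community.docker.docker_compose_v2:  # assumed module; the role may use another
    project_src: /opt/cephclient
  register: result
  until: result is success
  retries: 10  # matches the "(10 retries left)" countdown in the log
  delay: 5     # seconds between attempts
```

With `retries: 10`, the countdown starts at "(10 retries left)" and the task is only reported as failed once all retries are exhausted; here it succeeded on a later attempt, hence the subsequent `ok: [testbed-manager]`.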
2025-06-01 23:01:22.269486 | orchestrator | ok: [testbed-manager]
2025-06-01 23:01:22.269493 | orchestrator |
2025-06-01 23:01:22.269499 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-06-01 23:01:22.269506 | orchestrator | Sunday 01 June 2025 23:00:32 +0000 (0:00:42.492) 0:00:47.302 ***********
2025-06-01 23:01:22.269513 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-06-01 23:01:22.269520 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-06-01 23:01:22.269526 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-06-01 23:01:22.269532 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-06-01 23:01:22.269539 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-06-01 23:01:22.269545 | orchestrator |
2025-06-01 23:01:22.269552 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-06-01 23:01:22.269559 | orchestrator | Sunday 01 June 2025 23:00:36 +0000 (0:00:04.222) 0:00:51.525 ***********
2025-06-01 23:01:22.269565 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-06-01 23:01:22.269572 | orchestrator |
2025-06-01 23:01:22.269579 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-06-01 23:01:22.269585 | orchestrator | Sunday 01 June 2025 23:00:36 +0000 (0:00:00.448) 0:00:51.973 ***********
2025-06-01 23:01:22.269592 | orchestrator | skipping: [testbed-manager]
2025-06-01 23:01:22.269598 | orchestrator |
2025-06-01 23:01:22.269604 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-06-01 23:01:22.269611 | orchestrator | Sunday 01 June 2025 23:00:37 +0000 (0:00:00.117) 0:00:52.090 ***********
2025-06-01 23:01:22.269618 | orchestrator | skipping: [testbed-manager]
2025-06-01 23:01:22.269624 | orchestrator |
2025-06-01 23:01:22.269631 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-06-01 23:01:22.269637 | orchestrator | Sunday 01 June 2025 23:00:37 +0000 (0:00:00.279) 0:00:52.370 ***********
2025-06-01 23:01:22.269788 | orchestrator | changed: [testbed-manager]
2025-06-01 23:01:22.269800 | orchestrator |
2025-06-01 23:01:22.269807 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-06-01 23:01:22.269814 | orchestrator | Sunday 01 June 2025 23:00:38 +0000 (0:00:01.512) 0:00:53.883 ***********
2025-06-01 23:01:22.269821 | orchestrator | changed: [testbed-manager]
2025-06-01 23:01:22.269829 | orchestrator |
2025-06-01 23:01:22.269914 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-06-01 23:01:22.269920 | orchestrator | Sunday 01 June 2025 23:00:39 +0000 (0:00:00.926) 0:00:54.809 ***********
2025-06-01 23:01:22.269926 | orchestrator | changed: [testbed-manager]
2025-06-01 23:01:22.269932 | orchestrator |
2025-06-01 23:01:22.269938 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-06-01 23:01:22.269957 | orchestrator | Sunday 01 June 2025 23:00:40 +0000 (0:00:00.603) 0:00:55.413 ***********
2025-06-01 23:01:22.269965 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-06-01 23:01:22.269971 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-06-01 23:01:22.269978 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-06-01 23:01:22.269984 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-06-01 23:01:22.269990 | orchestrator |
2025-06-01 23:01:22.269997 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:01:22.270004 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 23:01:22.270051 | orchestrator |
2025-06-01 23:01:22.270060 | orchestrator |
2025-06-01 23:01:22.270080 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:01:22.270087 | orchestrator | Sunday 01 June 2025 23:00:41 +0000 (0:00:01.456) 0:00:56.869 ***********
2025-06-01 23:01:22.270094 | orchestrator | ===============================================================================
2025-06-01 23:01:22.270101 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.49s
2025-06-01 23:01:22.270108 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.22s
2025-06-01 23:01:22.270114 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.51s
2025-06-01 23:01:22.270120 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.46s
2025-06-01 23:01:22.270126 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.30s
2025-06-01 23:01:22.270132 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.19s
2025-06-01 23:01:22.270139 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.99s
2025-06-01 23:01:22.270146 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.93s
2025-06-01 23:01:22.270153 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.90s
2025-06-01 23:01:22.270167 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.60s
2025-06-01 23:01:22.270174 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.45s
2025-06-01 23:01:22.270181 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.28s
2025-06-01 23:01:22.270187 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.20s
2025-06-01 23:01:22.270194 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s
2025-06-01 23:01:22.270201 | orchestrator |
2025-06-01 23:01:22.270207 | orchestrator |
2025-06-01 23:01:22.270214 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 23:01:22.270221 | orchestrator |
2025-06-01 23:01:22.270227 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 23:01:22.270233 | orchestrator | Sunday 01 June 2025 23:00:46 +0000 (0:00:00.192) 0:00:00.192 ***********
2025-06-01 23:01:22.270240 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:01:22.270247 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:01:22.270253 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:01:22.270260 | orchestrator |
2025-06-01 23:01:22.270267 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 23:01:22.270273 | orchestrator | Sunday 01 June 2025 23:00:46 +0000 (0:00:00.345) 0:00:00.538 ***********
2025-06-01 23:01:22.270280 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-06-01 23:01:22.270287 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-06-01 23:01:22.270294 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-06-01 23:01:22.270301 | orchestrator |
2025-06-01 23:01:22.270308 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-06-01 23:01:22.270315 | orchestrator |
2025-06-01 23:01:22.270321 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-06-01 23:01:22.270337 | orchestrator | Sunday 01 June 2025 23:00:47 +0000 (0:00:00.821) 0:00:01.359 ***********
2025-06-01 23:01:22.270345 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:01:22.270351 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:01:22.270358 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:01:22.270365 | orchestrator |
2025-06-01 23:01:22.270372 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:01:22.270380 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:01:22.270388 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:01:22.270395 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:01:22.270402 | orchestrator |
2025-06-01 23:01:22.270409 | orchestrator |
2025-06-01 23:01:22.270416 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:01:22.270423 | orchestrator | Sunday 01 June 2025 23:00:48 +0000 (0:00:00.725) 0:00:02.085 ***********
2025-06-01 23:01:22.270430 | orchestrator | ===============================================================================
2025-06-01 23:01:22.270437 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s
2025-06-01 23:01:22.270443 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.73s
2025-06-01 23:01:22.270450 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2025-06-01 23:01:22.270457 | orchestrator |
2025-06-01 23:01:22.271036 | orchestrator |
2025-06-01 23:01:22.271061 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 23:01:22.271068 | orchestrator |
2025-06-01 23:01:22.271074 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 23:01:22.271080 | orchestrator | Sunday 01 June 2025 22:58:34 +0000 (0:00:00.259) 0:00:00.259 ***********
2025-06-01 23:01:22.271086 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:01:22.271092 |
orchestrator | ok: [testbed-node-1] 2025-06-01 23:01:22.271098 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:01:22.271104 | orchestrator | 2025-06-01 23:01:22.271111 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 23:01:22.271117 | orchestrator | Sunday 01 June 2025 22:58:34 +0000 (0:00:00.291) 0:00:00.550 *********** 2025-06-01 23:01:22.271124 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-06-01 23:01:22.271131 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-06-01 23:01:22.271137 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-06-01 23:01:22.271144 | orchestrator | 2025-06-01 23:01:22.271151 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-06-01 23:01:22.271157 | orchestrator | 2025-06-01 23:01:22.271164 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-01 23:01:22.271170 | orchestrator | Sunday 01 June 2025 22:58:34 +0000 (0:00:00.425) 0:00:00.976 *********** 2025-06-01 23:01:22.271177 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:01:22.271184 | orchestrator | 2025-06-01 23:01:22.271191 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-06-01 23:01:22.271197 | orchestrator | Sunday 01 June 2025 22:58:35 +0000 (0:00:00.539) 0:00:01.515 *********** 2025-06-01 23:01:22.271217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:01:22.271237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:01:22.271275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:01:22.271284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 23:01:22.271291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 23:01:22.271303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 23:01:22.271316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:01:22.271322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:01:22.271329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:01:22.271335 | orchestrator | 2025-06-01 23:01:22.271342 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-06-01 23:01:22.271352 | orchestrator | Sunday 01 June 2025 22:58:37 +0000 (0:00:01.725) 0:00:03.241 *********** 2025-06-01 23:01:22.271359 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-06-01 23:01:22.271365 | orchestrator | 2025-06-01 23:01:22.271370 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-06-01 23:01:22.271376 | orchestrator | Sunday 01 June 2025 22:58:37 +0000 (0:00:00.842) 0:00:04.083 *********** 2025-06-01 23:01:22.271382 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:01:22.271389 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:01:22.271394 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:01:22.271401 | orchestrator | 2025-06-01 23:01:22.271406 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-06-01 23:01:22.271412 | orchestrator | Sunday 01 June 2025 22:58:38 +0000 (0:00:00.480) 0:00:04.564 *********** 2025-06-01 
23:01:22.271418 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-01 23:01:22.271424 | orchestrator | 2025-06-01 23:01:22.271431 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-01 23:01:22.271436 | orchestrator | Sunday 01 June 2025 22:58:39 +0000 (0:00:00.712) 0:00:05.277 *********** 2025-06-01 23:01:22.271442 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:01:22.271453 | orchestrator | 2025-06-01 23:01:22.271459 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-06-01 23:01:22.271465 | orchestrator | Sunday 01 June 2025 22:58:39 +0000 (0:00:00.502) 0:00:05.780 *********** 2025-06-01 23:01:22.271475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:01:22.271482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:01:22.271495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}}}}) 2025-06-01 23:01:22.271503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 23:01:22.271513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 23:01:22.271523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 23:01:22.271529 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:01:22.271535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:01:22.271542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:01:22.271548 | 
orchestrator | 2025-06-01 23:01:22.271554 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-06-01 23:01:22.271560 | orchestrator | Sunday 01 June 2025 22:58:42 +0000 (0:00:03.414) 0:00:09.195 *********** 2025-06-01 23:01:22.271569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-01 23:01:22.271580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 
23:01:22.271593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 23:01:22.271598 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:01:22.271605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-01 23:01:22.271610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:01:22.271623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 23:01:22.271634 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:01:22.271666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-01 23:01:22.271676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:01:22.271682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 23:01:22.271688 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:01:22.271694 | orchestrator | 2025-06-01 23:01:22.271699 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-06-01 23:01:22.271705 | orchestrator | Sunday 01 June 2025 22:58:43 +0000 (0:00:00.558) 0:00:09.753 *********** 2025-06-01 23:01:22.271711 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-01 23:01:22.271721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:01:22.271731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 23:01:22.271737 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:01:22.271746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-01 23:01:22.271752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:01:22.271758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 23:01:22.271764 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:01:22.271776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}})  2025-06-01 23:01:22.271786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:01:22.271793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 23:01:22.271798 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:01:22.271804 | orchestrator | 2025-06-01 23:01:22.271813 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-06-01 23:01:22.271818 | orchestrator | Sunday 01 June 2025 22:58:44 +0000 (0:00:00.715) 0:00:10.468 *********** 2025-06-01 23:01:22.271824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:01:22.271830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:01:22.271845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:01:22.271852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 23:01:22.271858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 23:01:22.271963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 23:01:22.271979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:01:22.271986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:01:22.272003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:01:22.272009 | orchestrator | 2025-06-01 23:01:22.272016 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-06-01 23:01:22.272022 | orchestrator | Sunday 01 June 2025 22:58:47 +0000 (0:00:03.463) 0:00:13.932 *********** 2025-06-01 23:01:22.272032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:01:22.272039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:01:22.272046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}}) 2025-06-01 23:01:22.272058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:01:22.272070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:01:22.272077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:01:22.272086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:01:22.272093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:01:22.272100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:01:22.272112 | orchestrator | 2025-06-01 23:01:22.272119 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-06-01 23:01:22.272125 | orchestrator | Sunday 01 June 2025 22:58:53 +0000 (0:00:05.771) 0:00:19.703 *********** 2025-06-01 23:01:22.272131 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:01:22.272136 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:01:22.272142 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:01:22.272148 | orchestrator | 2025-06-01 23:01:22.272153 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-06-01 23:01:22.272159 | orchestrator | Sunday 01 June 2025 22:58:54 +0000 (0:00:01.410) 0:00:21.114 *********** 2025-06-01 23:01:22.272164 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:01:22.272170 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:01:22.272176 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:01:22.272181 | orchestrator | 2025-06-01 23:01:22.272190 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-06-01 23:01:22.272195 | orchestrator | Sunday 01 June 2025 22:58:55 +0000 (0:00:00.634) 0:00:21.748 *********** 2025-06-01 23:01:22.272201 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:01:22.272207 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:01:22.272213 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:01:22.272218 | orchestrator 
| 2025-06-01 23:01:22.272224 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-06-01 23:01:22.272229 | orchestrator | Sunday 01 June 2025 22:58:56 +0000 (0:00:00.660) 0:00:22.409 *********** 2025-06-01 23:01:22.272235 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:01:22.272240 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:01:22.272246 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:01:22.272251 | orchestrator | 2025-06-01 23:01:22.272256 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-06-01 23:01:22.272262 | orchestrator | Sunday 01 June 2025 22:58:56 +0000 (0:00:00.423) 0:00:22.833 *********** 2025-06-01 23:01:22.272268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:01:22.272278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:01:22.272290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:01:22.272296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:01:22.272306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:01:22.272313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:01:22.272323 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:01:22.272333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:01:22.272338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:01:22.272344 | orchestrator | 2025-06-01 
23:01:22.272349 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-01 23:01:22.272355 | orchestrator | Sunday 01 June 2025 22:58:59 +0000 (0:00:02.458) 0:00:25.292 *********** 2025-06-01 23:01:22.272360 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:01:22.272366 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:01:22.272371 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:01:22.272377 | orchestrator | 2025-06-01 23:01:22.272383 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-06-01 23:01:22.272388 | orchestrator | Sunday 01 June 2025 22:58:59 +0000 (0:00:00.314) 0:00:25.606 *********** 2025-06-01 23:01:22.272395 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-01 23:01:22.272402 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-01 23:01:22.272411 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-01 23:01:22.272418 | orchestrator | 2025-06-01 23:01:22.272424 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-06-01 23:01:22.272430 | orchestrator | Sunday 01 June 2025 22:59:01 +0000 (0:00:02.405) 0:00:28.011 *********** 2025-06-01 23:01:22.272436 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-01 23:01:22.272442 | orchestrator | 2025-06-01 23:01:22.272448 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-06-01 23:01:22.272454 | orchestrator | Sunday 01 June 2025 22:59:02 +0000 (0:00:00.970) 0:00:28.982 *********** 2025-06-01 23:01:22.272459 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:01:22.272465 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:01:22.272470 | orchestrator | skipping: 
[testbed-node-2]
2025-06-01 23:01:22.272477 | orchestrator |
2025-06-01 23:01:22.272484 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-06-01 23:01:22.272490 | orchestrator | Sunday 01 June 2025 22:59:03 +0000 (0:00:00.546) 0:00:29.528 ***********
2025-06-01 23:01:22.272496 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-01 23:01:22.272502 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-01 23:01:22.272508 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-01 23:01:22.272514 | orchestrator |
2025-06-01 23:01:22.272519 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-06-01 23:01:22.272526 | orchestrator | Sunday 01 June 2025 22:59:04 +0000 (0:00:01.248) 0:00:30.777 ***********
2025-06-01 23:01:22.272532 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:01:22.272538 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:01:22.272549 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:01:22.272555 | orchestrator |
2025-06-01 23:01:22.272560 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-06-01 23:01:22.272566 | orchestrator | Sunday 01 June 2025 22:59:04 +0000 (0:00:00.324) 0:00:31.102 ***********
2025-06-01 23:01:22.272572 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-06-01 23:01:22.272578 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-06-01 23:01:22.272583 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-06-01 23:01:22.272589 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-06-01 23:01:22.272599 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-06-01 23:01:22.272605 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-06-01 23:01:22.272611 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-06-01 23:01:22.272617 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-06-01 23:01:22.272623 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-06-01 23:01:22.272628 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-06-01 23:01:22.272635 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-06-01 23:01:22.272691 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-06-01 23:01:22.272698 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-06-01 23:01:22.272704 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-06-01 23:01:22.272710 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-06-01 23:01:22.272716 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-01 23:01:22.272722 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-01 23:01:22.272729 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-01 23:01:22.272735 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-01 23:01:22.272741 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-01 23:01:22.272747 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-01 23:01:22.272754 | orchestrator |
2025-06-01 23:01:22.272760 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-06-01 23:01:22.272765 | orchestrator | Sunday 01 June 2025 22:59:14 +0000 (0:00:09.404) 0:00:40.506 ***********
2025-06-01 23:01:22.272771 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-01 23:01:22.272778 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-01 23:01:22.272784 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-01 23:01:22.272790 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-01 23:01:22.272797 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-01 23:01:22.272808 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-01 23:01:22.272815 | orchestrator |
2025-06-01 23:01:22.272821 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2025-06-01 23:01:22.272834 | orchestrator | Sunday 01 June 2025 22:59:16 +0000 (0:00:02.563) 0:00:43.069 ***********
2025-06-01 23:01:22.272841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:01:22.272852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:01:22.272860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:01:22.272867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 23:01:22.272879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 
23:01:22.272895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 23:01:22.272902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:01:22.272912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:01:22.272918 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:01:22.272924 | orchestrator | 2025-06-01 23:01:22.272930 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-01 23:01:22.272936 | orchestrator | Sunday 01 June 2025 22:59:19 +0000 (0:00:02.231) 0:00:45.301 *********** 2025-06-01 23:01:22.272943 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:01:22.272949 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:01:22.272954 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:01:22.272960 | orchestrator | 2025-06-01 23:01:22.272966 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-06-01 23:01:22.272973 | orchestrator | Sunday 01 June 2025 22:59:19 +0000 (0:00:00.297) 0:00:45.598 *********** 2025-06-01 23:01:22.272978 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:01:22.272985 | orchestrator | 2025-06-01 23:01:22.272990 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-06-01 23:01:22.273001 | orchestrator | Sunday 01 June 2025 22:59:21 +0000 (0:00:02.137) 0:00:47.736 *********** 2025-06-01 23:01:22.273007 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:01:22.273012 | orchestrator | 2025-06-01 23:01:22.273018 | orchestrator | TASK [keystone : Checking for any running keystone_fernet 
containers] **********
2025-06-01 23:01:22.273024 | orchestrator | Sunday 01 June 2025 22:59:24 +0000 (0:00:03.166) 0:00:50.903 ***********
2025-06-01 23:01:22.273029 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:01:22.273035 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:01:22.273041 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:01:22.273047 | orchestrator |
2025-06-01 23:01:22.273053 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-06-01 23:01:22.273059 | orchestrator | Sunday 01 June 2025 22:59:25 +0000 (0:00:01.060) 0:00:51.963 ***********
2025-06-01 23:01:22.273065 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:01:22.273075 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:01:22.273081 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:01:22.273088 | orchestrator |
2025-06-01 23:01:22.273091 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-06-01 23:01:22.273095 | orchestrator | Sunday 01 June 2025 22:59:26 +0000 (0:00:00.561) 0:00:52.525 ***********
2025-06-01 23:01:22.273099 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:01:22.273103 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:01:22.273106 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:01:22.273110 | orchestrator |
2025-06-01 23:01:22.273114 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-06-01 23:01:22.273118 | orchestrator | Sunday 01 June 2025 22:59:26 +0000 (0:00:00.521) 0:00:53.046 ***********
2025-06-01 23:01:22.273121 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:01:22.273125 | orchestrator |
2025-06-01 23:01:22.273129 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-06-01 23:01:22.273132 | orchestrator | Sunday 01 June 2025 22:59:40 +0000 (0:00:14.032) 0:01:07.079 ***********
2025-06-01 23:01:22.273136 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:01:22.273140 | orchestrator |
2025-06-01 23:01:22.273143 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-06-01 23:01:22.273147 | orchestrator | Sunday 01 June 2025 22:59:50 +0000 (0:00:09.681) 0:01:16.760 ***********
2025-06-01 23:01:22.273151 | orchestrator |
2025-06-01 23:01:22.273155 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-06-01 23:01:22.273158 | orchestrator | Sunday 01 June 2025 22:59:50 +0000 (0:00:00.261) 0:01:17.022 ***********
2025-06-01 23:01:22.273162 | orchestrator |
2025-06-01 23:01:22.273166 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-06-01 23:01:22.273169 | orchestrator | Sunday 01 June 2025 22:59:50 +0000 (0:00:00.069) 0:01:17.091 ***********
2025-06-01 23:01:22.273173 | orchestrator |
2025-06-01 23:01:22.273177 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-06-01 23:01:22.273180 | orchestrator | Sunday 01 June 2025 22:59:50 +0000 (0:00:00.067) 0:01:17.158 ***********
2025-06-01 23:01:22.273184 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:01:22.273188 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:01:22.273192 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:01:22.273195 | orchestrator |
2025-06-01 23:01:22.273199 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-06-01 23:01:22.273203 | orchestrator | Sunday 01 June 2025 23:00:09 +0000 (0:00:18.308) 0:01:35.467 ***********
2025-06-01 23:01:22.273206 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:01:22.273210 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:01:22.273217 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:01:22.273221 | orchestrator |
2025-06-01 23:01:22.273225 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-06-01 23:01:22.273229 | orchestrator | Sunday 01 June 2025 23:00:19 +0000 (0:00:10.043) 0:01:45.511 ***********
2025-06-01 23:01:22.273232 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:01:22.273241 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:01:22.273244 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:01:22.273248 | orchestrator |
2025-06-01 23:01:22.273252 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-01 23:01:22.273255 | orchestrator | Sunday 01 June 2025 23:00:27 +0000 (0:00:07.789) 0:01:53.300 ***********
2025-06-01 23:01:22.273259 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:01:22.273263 | orchestrator |
2025-06-01 23:01:22.273267 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-06-01 23:01:22.273270 | orchestrator | Sunday 01 June 2025 23:00:27 +0000 (0:00:00.847) 0:01:54.148 ***********
2025-06-01 23:01:22.273274 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:01:22.273278 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:01:22.273282 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:01:22.273285 | orchestrator |
2025-06-01 23:01:22.273289 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-06-01 23:01:22.273293 | orchestrator | Sunday 01 June 2025 23:00:28 +0000 (0:00:00.710) 0:01:54.859 ***********
2025-06-01 23:01:22.273296 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:01:22.273300 | orchestrator |
2025-06-01 23:01:22.273304 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-06-01 23:01:22.273307 | orchestrator | Sunday 01 June 2025 23:00:30 +0000 (0:00:01.759) 0:01:56.618 ***********
2025-06-01 23:01:22.273311 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-06-01 23:01:22.273315 | orchestrator |
2025-06-01 23:01:22.273319 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-06-01 23:01:22.273323 | orchestrator | Sunday 01 June 2025 23:00:40 +0000 (0:00:10.176) 0:02:06.795 ***********
2025-06-01 23:01:22.273326 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-06-01 23:01:22.273330 | orchestrator |
2025-06-01 23:01:22.273334 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-06-01 23:01:22.273337 | orchestrator | Sunday 01 June 2025 23:01:07 +0000 (0:00:26.470) 0:02:33.265 ***********
2025-06-01 23:01:22.273341 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-06-01 23:01:22.273345 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-06-01 23:01:22.273349 | orchestrator |
2025-06-01 23:01:22.273352 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-06-01 23:01:22.273356 | orchestrator | Sunday 01 June 2025 23:01:12 +0000 (0:00:05.582) 0:02:38.848 ***********
2025-06-01 23:01:22.273360 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:01:22.273363 | orchestrator |
2025-06-01 23:01:22.273367 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-06-01 23:01:22.273371 | orchestrator | Sunday 01 June 2025 23:01:13 +0000 (0:00:01.079) 0:02:39.927 ***********
2025-06-01 23:01:22.273374 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:01:22.273378 | orchestrator |
2025-06-01 23:01:22.273382 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-06-01 23:01:22.273388 | orchestrator | Sunday 01 June 2025 23:01:13 +0000 (0:00:00.280) 0:02:40.209 ***********
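The fernet bootstrap and "Run key distribution" tasks above maintain Keystone's rotating fernet token keys. As a rough illustration only (a simplified model of Keystone's documented rotation scheme, where index 0 is the staged key and the highest index is the primary; this is not Kolla's actual implementation, and `rotate_fernet_keys` is a hypothetical helper):

```python
def rotate_fernet_keys(keys, max_active_keys, new_key):
    """Simplified keystone-style fernet key rotation (illustrative only).

    keys: dict mapping index -> key material. Index 0 is the staged key;
    the highest index is the primary key used for new tokens. Rotation
    promotes the staged key to a new primary index, stages new_key at 0,
    and prunes the oldest secondary keys beyond max_active_keys.
    """
    rotated = dict(keys)
    next_primary = max(rotated) + 1
    rotated[next_primary] = rotated.pop(0)   # promote staged key to primary
    rotated[0] = new_key                     # stage a fresh key
    # Drop the oldest secondary keys (never index 0, never the primary)
    # until we are within the configured limit.
    while len(rotated) > max_active_keys:
        oldest = min(k for k in rotated if k != 0)
        if oldest == next_primary:
            break
        del rotated[oldest]
    return rotated
```

For example, rotating `{0: 'a', 1: 'b'}` with a limit of 3 stages a fresh key at 0 and promotes `'a'` to index 2; rotating again evicts the oldest secondary key, matching the periodic rotation the `crontab`/`fernet-rotate.sh` files deployed above are meant to drive.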
2025-06-01 23:01:22.273392 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:01:22.273396 | orchestrator |
2025-06-01 23:01:22.273399 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-06-01 23:01:22.273403 | orchestrator | Sunday 01 June 2025 23:01:14 +0000 (0:00:00.862) 0:02:40.613 ***********
2025-06-01 23:01:22.273407 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:01:22.273411 | orchestrator |
2025-06-01 23:01:22.273414 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-06-01 23:01:22.273418 | orchestrator | Sunday 01 June 2025 23:01:15 +0000 (0:00:00.862) 0:02:41.476 ***********
2025-06-01 23:01:22.273422 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:01:22.273426 | orchestrator |
2025-06-01 23:01:22.273429 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-01 23:01:22.273438 | orchestrator | Sunday 01 June 2025 23:01:18 +0000 (0:00:02.829) 0:02:44.305 ***********
2025-06-01 23:01:22.273442 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:01:22.273446 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:01:22.273450 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:01:22.273453 | orchestrator |
2025-06-01 23:01:22.273457 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:01:22.273461 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-06-01 23:01:22.273466 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-06-01 23:01:22.273470 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-06-01 23:01:22.273474 | orchestrator |
2025-06-01 23:01:22.273478 | orchestrator |
2025-06-01 23:01:22.273481 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:01:22.273485 | orchestrator | Sunday 01 June 2025 23:01:19 +0000 (0:00:01.384) 0:02:45.689 ***********
2025-06-01 23:01:22.273489 | orchestrator | ===============================================================================
2025-06-01 23:01:22.273493 | orchestrator | service-ks-register : keystone | Creating services --------------------- 26.47s
2025-06-01 23:01:22.273500 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 18.31s
2025-06-01 23:01:22.273504 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.03s
2025-06-01 23:01:22.273507 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.18s
2025-06-01 23:01:22.273511 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.04s
2025-06-01 23:01:22.273515 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.68s
2025-06-01 23:01:22.273518 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.40s
2025-06-01 23:01:22.273522 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.79s
2025-06-01 23:01:22.273526 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.77s
2025-06-01 23:01:22.273529 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 5.58s
2025-06-01 23:01:22.273533 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.46s
2025-06-01 23:01:22.273537 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.41s
2025-06-01 23:01:22.273540 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 3.17s
2025-06-01 23:01:22.273544 | orchestrator | keystone : Creating default user role ----------------------------------- 2.83s
2025-06-01 23:01:22.273548 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.56s
2025-06-01 23:01:22.273552 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.46s
2025-06-01 23:01:22.273555 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.41s
2025-06-01 23:01:22.273559 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.23s
2025-06-01 23:01:22.273563 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.14s
2025-06-01 23:01:22.273566 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.76s
2025-06-01 23:01:22.273570 | orchestrator | 2025-06-01 23:01:22 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED
2025-06-01 23:01:22.273574 | orchestrator | 2025-06-01 23:01:22 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:01:25.297020 | orchestrator | 2025-06-01 23:01:25 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state STARTED
2025-06-01 23:01:25.298115 | orchestrator | 2025-06-01 23:01:25 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED
2025-06-01 23:01:25.300119 | orchestrator | 2025-06-01 23:01:25 | INFO  | Task 483976cf-99c0-4a38-9e79-cb038a956b81 is in state STARTED
2025-06-01 23:01:25.301757 | orchestrator | 2025-06-01 23:01:25 | INFO  | Task 3bffeaf3-c713-4a36-bcbc-e72bcdf9090d is in state STARTED
2025-06-01 23:01:25.301972 | orchestrator | 2025-06-01 23:01:25 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED
2025-06-01 23:01:25.302006 | orchestrator | 2025-06-01 23:01:25 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:01:28.336472 | orchestrator | 2025-06-01 23:01:28 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state STARTED
2025-06-01 23:01:28.338349 |
orchestrator | 2025-06-01 23:01:28 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED 2025-06-01 23:01:28.339093 | orchestrator | 2025-06-01 23:01:28 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:01:28.339610 | orchestrator | 2025-06-01 23:01:28 | INFO  | Task 483976cf-99c0-4a38-9e79-cb038a956b81 is in state SUCCESS 2025-06-01 23:01:28.340320 | orchestrator | 2025-06-01 23:01:28 | INFO  | Task 3bffeaf3-c713-4a36-bcbc-e72bcdf9090d is in state STARTED 2025-06-01 23:01:28.342365 | orchestrator | 2025-06-01 23:01:28 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED 2025-06-01 23:01:28.342413 | orchestrator | 2025-06-01 23:01:28 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:01:31.383881 | orchestrator | 2025-06-01 23:01:31 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state STARTED 2025-06-01 23:01:31.385045 | orchestrator | 2025-06-01 23:01:31 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED 2025-06-01 23:01:31.385775 | orchestrator | 2025-06-01 23:01:31 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:01:31.386351 | orchestrator | 2025-06-01 23:01:31 | INFO  | Task 3bffeaf3-c713-4a36-bcbc-e72bcdf9090d is in state STARTED 2025-06-01 23:01:31.387196 | orchestrator | 2025-06-01 23:01:31 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED 2025-06-01 23:01:31.387386 | orchestrator | 2025-06-01 23:01:31 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:01:34.426836 | orchestrator | 2025-06-01 23:01:34 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state STARTED 2025-06-01 23:01:34.426963 | orchestrator | 2025-06-01 23:01:34 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED 2025-06-01 23:01:34.431018 | orchestrator | 2025-06-01 23:01:34 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:01:34.431057 | 
orchestrator | 2025-06-01 23:01:34 | INFO  | Task 3bffeaf3-c713-4a36-bcbc-e72bcdf9090d is in state STARTED 2025-06-01 23:01:34.437232 | orchestrator | 2025-06-01 23:01:34 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED 2025-06-01 23:01:34.437259 | orchestrator | 2025-06-01 23:01:34 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:01:37.487618 | orchestrator | 2025-06-01 23:01:37 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state STARTED 2025-06-01 23:01:37.487810 | orchestrator | 2025-06-01 23:01:37 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED 2025-06-01 23:01:37.487835 | orchestrator | 2025-06-01 23:01:37 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:01:37.487857 | orchestrator | 2025-06-01 23:01:37 | INFO  | Task 3bffeaf3-c713-4a36-bcbc-e72bcdf9090d is in state STARTED 2025-06-01 23:01:37.487918 | orchestrator | 2025-06-01 23:01:37 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED 2025-06-01 23:01:37.487939 | orchestrator | 2025-06-01 23:01:37 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:01:40.531052 | orchestrator | 2025-06-01 23:01:40 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state STARTED 2025-06-01 23:01:40.531439 | orchestrator | 2025-06-01 23:01:40 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED 2025-06-01 23:01:40.532136 | orchestrator | 2025-06-01 23:01:40 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:01:40.532891 | orchestrator | 2025-06-01 23:01:40 | INFO  | Task 3bffeaf3-c713-4a36-bcbc-e72bcdf9090d is in state STARTED 2025-06-01 23:01:40.534436 | orchestrator | 2025-06-01 23:01:40 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED 2025-06-01 23:01:40.534460 | orchestrator | 2025-06-01 23:01:40 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:01:43.567076 | orchestrator | 2025-06-01 
23:01:43 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state STARTED
2025-06-01 23:01:43.571183 | orchestrator | 2025-06-01 23:01:43 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED
2025-06-01 23:01:43.571991 | orchestrator | 2025-06-01 23:01:43 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED
2025-06-01 23:01:43.572739 | orchestrator | 2025-06-01 23:01:43 | INFO  | Task 3bffeaf3-c713-4a36-bcbc-e72bcdf9090d is in state STARTED
2025-06-01 23:01:43.573510 | orchestrator | 2025-06-01 23:01:43 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED
2025-06-01 23:01:43.573539 | orchestrator | 2025-06-01 23:01:43 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:02:04.844926 | orchestrator | 2025-06-01 23:02:04 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state STARTED
2025-06-01 23:02:04.846215 | orchestrator | 2025-06-01 23:02:04 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED
2025-06-01 23:02:04.846249 | orchestrator | 2025-06-01 23:02:04 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED
2025-06-01 23:02:04.847173 | orchestrator | 2025-06-01 23:02:04 | INFO  | Task 
3bffeaf3-c713-4a36-bcbc-e72bcdf9090d is in state STARTED
2025-06-01 23:02:04.848506 | orchestrator | 2025-06-01 23:02:04 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED
2025-06-01 23:02:04.848545 | orchestrator | 2025-06-01 23:02:04 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:02:07.875347 | orchestrator | 2025-06-01 23:02:07 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state STARTED
2025-06-01 23:02:07.875492 | orchestrator | 2025-06-01 23:02:07 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED
2025-06-01 23:02:07.876252 | orchestrator | 2025-06-01 23:02:07 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED
2025-06-01 23:02:07.876696 | orchestrator | 2025-06-01 23:02:07 | INFO  | Task 3bffeaf3-c713-4a36-bcbc-e72bcdf9090d is in state SUCCESS

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Sunday 01 June 2025 23:00:54 +0000 (0:00:00.258)       0:00:00.258 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [Group hosts based on enabled services] ***********************************
Sunday 01 June 2025 23:00:55 +0000 (0:00:01.078)       0:00:01.337 ***********
ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
ok: [testbed-manager] => (item=enable_ceph_rgw_True)
ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
ok: [testbed-node-5] => (item=enable_ceph_rgw_True)

PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-rgw : include_tasks] ************************************************
Sunday 01 June 2025 23:00:56 +0000 (0:00:01.531)       0:00:02.868 ***********
included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5

TASK [service-ks-register : ceph-rgw | Creating services] **********************
Sunday 01 June 2025 23:00:58 +0000 (0:00:01.434)       0:00:04.303 ***********
changed: [testbed-node-0] => (item=swift (object-store))

TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
Sunday 01 June 2025 23:01:01 +0000 (0:00:03.593)       0:00:07.896 ***********
changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)

TASK [service-ks-register : ceph-rgw | Creating projects] **********************
Sunday 01 June 2025 23:01:08 +0000 (0:00:06.157)       0:00:14.054 ***********
changed: [testbed-node-0] => (item=service)

TASK [service-ks-register : ceph-rgw | Creating users] *************************
Sunday 01 June 2025 23:01:11 +0000 (0:00:02.927)       0:00:16.981 ***********
[WARNING]: Module did not set no_log for update_password
changed: [testbed-node-0] => (item=ceph_rgw -> service)

TASK [service-ks-register : ceph-rgw | Creating roles] *************************
Sunday 01 June 2025 23:01:14 +0000 (0:00:03.386)       0:00:20.368 ***********
ok: [testbed-node-0] => (item=admin)
changed: [testbed-node-0] => (item=ResellerAdmin)

TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
Sunday 01 June 2025 23:01:20 +0000 (0:00:05.999)       0:00:26.367 ***********
changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)

PLAY RECAP *********************************************************************
testbed-manager            : ok=3  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-0             : ok=9  changed=6  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-1             : ok=3  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-2             : ok=3  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-3             : ok=3  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-4             : ok=3  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-5             : ok=3  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Sunday 01 June 2025 23:01:25 +0000 (0:00:04.726)       0:00:31.094 ***********
===============================================================================
service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.16s
service-ks-register : ceph-rgw | Creating roles ------------------------- 6.00s
service-ks-register : ceph-rgw | Granting user roles -------------------- 4.73s
service-ks-register : ceph-rgw | Creating services ---------------------- 3.59s
service-ks-register : ceph-rgw | Creating users ------------------------- 3.39s
service-ks-register : ceph-rgw | Creating projects ---------------------- 2.93s
Group hosts based on enabled services ----------------------------------- 1.53s
ceph-rgw : include_tasks ------------------------------------------------ 1.43s
Group hosts based on Kolla action --------------------------------------- 1.08s

PLAY [Bootstrap ceph dashboard] ************************************************

TASK [Disable the ceph dashboard] **********************************************
Sunday 01 June 2025 23:00:46 +0000 (0:00:00.276)       0:00:00.276 ***********
changed: [testbed-manager]

TASK [Set mgr/dashboard/ssl to false] ******************************************
Sunday 01 June 2025 23:00:48 +0000 (0:00:02.075)       0:00:02.351 ***********
changed: [testbed-manager]

TASK [Set mgr/dashboard/server_port to 7000] ***********************************
Sunday 01 June 2025 23:00:49 +0000 (0:00:01.111)       0:00:03.463 ***********
changed: [testbed-manager]

TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
Sunday 01 June 2025 23:00:50 +0000 (0:00:01.126)       0:00:04.589 ***********
changed: [testbed-manager]

TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
Sunday 01 June 2025 23:00:52 +0000 (0:00:01.204)       0:00:05.794 ***********
changed: [testbed-manager]

TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
Sunday 01 June 2025 23:00:53 +0000 (0:00:01.195)       0:00:06.990 ***********
changed: [testbed-manager]

TASK [Enable the ceph dashboard] ***********************************************
Sunday 01 June 2025 23:00:54 +0000 (0:00:01.009)       0:00:07.999 ***********
changed: [testbed-manager]

TASK [Write ceph_dashboard_password to temporary file] *************************
Sunday 01 June 2025 23:00:55 +0000 (0:00:01.069)       0:00:09.069 ***********
changed: [testbed-manager]

TASK [Create admin user] *******************************************************
Sunday 01 June 2025 23:00:56 +0000 (0:00:01.090)       0:00:10.159 ***********
changed: [testbed-manager]

TASK [Remove temporary file for ceph_dashboard_password] ***********************
Sunday 01 June 2025 23:01:40 +0000 (0:00:44.435)       0:00:54.595 ***********
skipping: [testbed-manager]

PLAY [Restart ceph manager services] *******************************************

TASK [Restart ceph manager service] ********************************************
Sunday 01 June 2025 23:01:40 +0000 (0:00:00.167)       0:00:54.763 ***********
changed: [testbed-node-0]

PLAY [Restart ceph manager services] *******************************************

TASK [Restart ceph manager service] ********************************************
Sunday 01 June 2025 23:01:42 +0000 (0:00:01.673)       0:00:56.437 ***********
changed: [testbed-node-1]

PLAY [Restart ceph manager services] *******************************************

TASK [Restart ceph manager service] ********************************************
Sunday 01 June 2025 23:01:53 +0000 (0:00:11.203)       0:01:07.641 ***********
changed: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-manager            : ok=9  changed=9  unreachable=0  failed=0  skipped=1  rescued=0  ignored=0
testbed-node-0             : ok=1  changed=1  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-1             : ok=1  changed=1  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-2             : ok=1  changed=1  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Sunday 01 June 2025 23:02:05 +0000 (0:00:11.146)       0:01:18.787 ***********
===============================================================================
Create admin user ------------------------------------------------------ 44.44s
Restart ceph manager service ------------------------------------------- 24.02s
Disable the ceph dashboard ---------------------------------------------- 2.08s
Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.20s
Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.20s
Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.13s
Set mgr/dashboard/ssl to false ------------------------------------------ 1.11s
Write ceph_dashboard_password to temporary file ------------------------- 1.09s
Enable the ceph dashboard ----------------------------------------------- 1.07s
Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.01s
Remove temporary file for ceph_dashboard_password ----------------------- 0.17s
2025-06-01 23:02:07.879241 | orchestrator | 2025-06-01 23:02:07 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED
2025-06-01 23:02:07.879253 | orchestrator | 2025-06-01 23:02:07 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:02:10.905868 | orchestrator | 2025-06-01 23:02:10 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state STARTED
2025-06-01 23:02:10.906186 | orchestrator | 2025-06-01 23:02:10 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED
2025-06-01 23:02:10.907047 | orchestrator | 2025-06-01 23:02:10 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED
2025-06-01 23:02:10.907772 | orchestrator | 2025-06-01 23:02:10 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED
2025-06-01 23:02:10.907795 | orchestrator | 2025-06-01 23:02:10 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:02:13.947327 | orchestrator | 2025-06-01 23:02:13 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state STARTED
2025-06-01 23:02:13.949025 | orchestrator | 2025-06-01 23:02:13 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED
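The repeated "is in state STARTED … Wait 1 second(s) until the next check" entries in this console are a client polling a set of task IDs until each reaches a terminal state, at which point the buffered Ansible output for that task is flushed. A minimal sketch of that wait loop, assuming a hypothetical `get_state` callable (this is not the actual osism client API):

```python
import time

# States treated as terminal; STARTED/SUCCESS mirror the states seen in the log.
# The set is an assumption for this sketch, not taken from the real tool.
TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll each task until all are terminal; return {task_id: final_state}."""
    pending = set(task_ids)
    final = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)  # hypothetical state lookup
            log(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                final[task_id] = state
        pending -= set(final)  # stop polling tasks that finished this cycle
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return final
```

Note how this reproduces the observable behavior above: once a task reports SUCCESS it drops out of the poll set, which is why later cycles in the log list four tasks instead of five.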
2025-06-01 23:02:13.949554 | orchestrator | 2025-06-01 23:02:13 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED
2025-06-01 23:02:13.951706 | orchestrator | 2025-06-01 23:02:13 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED
2025-06-01 23:02:13.951726 | orchestrator | 2025-06-01 23:02:13 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:03:36.435005 | orchestrator | 2025-06-01 23:03:36 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state STARTED
2025-06-01 23:03:36.437459 | orchestrator | 2025-06-01 23:03:36 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED
2025-06-01 23:03:36.437501 | orchestrator | 2025-06-01 23:03:36 | INFO  | Task 
581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:03:36.437990 | orchestrator | 2025-06-01 23:03:36 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED 2025-06-01 23:03:36.438010 | orchestrator | 2025-06-01 23:03:36 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:03:39.494405 | orchestrator | 2025-06-01 23:03:39 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state STARTED 2025-06-01 23:03:39.495817 | orchestrator | 2025-06-01 23:03:39 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED 2025-06-01 23:03:39.497299 | orchestrator | 2025-06-01 23:03:39 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:03:39.498952 | orchestrator | 2025-06-01 23:03:39 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED 2025-06-01 23:03:39.500043 | orchestrator | 2025-06-01 23:03:39 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:03:42.538250 | orchestrator | 2025-06-01 23:03:42 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state STARTED 2025-06-01 23:03:42.539113 | orchestrator | 2025-06-01 23:03:42 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED 2025-06-01 23:03:42.542495 | orchestrator | 2025-06-01 23:03:42 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:03:42.543904 | orchestrator | 2025-06-01 23:03:42 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED 2025-06-01 23:03:42.543938 | orchestrator | 2025-06-01 23:03:42 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:03:45.590108 | orchestrator | 2025-06-01 23:03:45 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state STARTED 2025-06-01 23:03:45.590503 | orchestrator | 2025-06-01 23:03:45 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED 2025-06-01 23:03:45.591551 | orchestrator | 2025-06-01 23:03:45 | INFO  | Task 
581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:03:45.592569 | orchestrator | 2025-06-01 23:03:45 | INFO  | Task 2447e620-9723-425c-bd09-8af67cb02d01 is in state STARTED 2025-06-01 23:03:45.594097 | orchestrator | 2025-06-01 23:03:45 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED 2025-06-01 23:03:45.594131 | orchestrator | 2025-06-01 23:03:45 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:03:48.627480 | orchestrator | 2025-06-01 23:03:48 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state STARTED 2025-06-01 23:03:48.628907 | orchestrator | 2025-06-01 23:03:48 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED 2025-06-01 23:03:48.631208 | orchestrator | 2025-06-01 23:03:48 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:03:48.631514 | orchestrator | 2025-06-01 23:03:48 | INFO  | Task 2447e620-9723-425c-bd09-8af67cb02d01 is in state STARTED 2025-06-01 23:03:48.633424 | orchestrator | 2025-06-01 23:03:48 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED 2025-06-01 23:03:48.633889 | orchestrator | 2025-06-01 23:03:48 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:03:51.693531 | orchestrator | 2025-06-01 23:03:51 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state STARTED 2025-06-01 23:03:51.697455 | orchestrator | 2025-06-01 23:03:51 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED 2025-06-01 23:03:51.699820 | orchestrator | 2025-06-01 23:03:51 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:03:51.702816 | orchestrator | 2025-06-01 23:03:51 | INFO  | Task 2447e620-9723-425c-bd09-8af67cb02d01 is in state STARTED 2025-06-01 23:03:51.705210 | orchestrator | 2025-06-01 23:03:51 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED 2025-06-01 23:03:51.705235 | orchestrator | 2025-06-01 23:03:51 | INFO  | Wait 1 
second(s) until the next check 2025-06-01 23:03:54.759227 | orchestrator | 2025-06-01 23:03:54 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state STARTED 2025-06-01 23:03:54.760244 | orchestrator | 2025-06-01 23:03:54 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED 2025-06-01 23:03:54.765038 | orchestrator | 2025-06-01 23:03:54 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:03:54.767967 | orchestrator | 2025-06-01 23:03:54 | INFO  | Task 2447e620-9723-425c-bd09-8af67cb02d01 is in state STARTED 2025-06-01 23:03:54.770600 | orchestrator | 2025-06-01 23:03:54 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED 2025-06-01 23:03:54.770625 | orchestrator | 2025-06-01 23:03:54 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:03:57.850243 | orchestrator | 2025-06-01 23:03:57 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state STARTED 2025-06-01 23:03:57.850391 | orchestrator | 2025-06-01 23:03:57 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED 2025-06-01 23:03:57.852249 | orchestrator | 2025-06-01 23:03:57 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:03:57.853518 | orchestrator | 2025-06-01 23:03:57 | INFO  | Task 2447e620-9723-425c-bd09-8af67cb02d01 is in state STARTED 2025-06-01 23:03:57.854752 | orchestrator | 2025-06-01 23:03:57 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED 2025-06-01 23:03:57.854779 | orchestrator | 2025-06-01 23:03:57 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:04:00.905458 | orchestrator | 2025-06-01 23:04:00 | INFO  | Task bcc77621-d366-4abd-90dd-77bc2f784b91 is in state SUCCESS 2025-06-01 23:04:00.906353 | orchestrator | 2025-06-01 23:04:00.906426 | orchestrator | 2025-06-01 23:04:00.906435 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 23:04:00.906443 | 
orchestrator |
2025-06-01 23:04:00.906449 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 23:04:00.906457 | orchestrator | Sunday 01 June 2025 23:00:54 +0000 (0:00:00.294) 0:00:00.294 ***********
2025-06-01 23:04:00.906464 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:04:00.906473 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:04:00.906480 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:04:00.906487 | orchestrator |
2025-06-01 23:04:00.906493 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 23:04:00.906500 | orchestrator | Sunday 01 June 2025 23:00:55 +0000 (0:00:00.332) 0:00:00.626 ***********
2025-06-01 23:04:00.906507 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-06-01 23:04:00.906514 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-06-01 23:04:00.906521 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-06-01 23:04:00.906528 | orchestrator |
2025-06-01 23:04:00.906534 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-06-01 23:04:00.906541 | orchestrator |
2025-06-01 23:04:00.906548 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-06-01 23:04:00.906554 | orchestrator | Sunday 01 June 2025 23:00:56 +0000 (0:00:00.995) 0:00:01.622 ***********
2025-06-01 23:04:00.906561 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:04:00.906568 | orchestrator |
2025-06-01 23:04:00.906575 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-06-01 23:04:00.906582 | orchestrator | Sunday 01 June 2025 23:00:56 +0000 (0:00:00.849) 0:00:02.471 ***********
2025-06-01 23:04:00.906588 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-06-01 23:04:00.906595 | orchestrator |
2025-06-01 23:04:00.906601 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-06-01 23:04:00.906626 | orchestrator | Sunday 01 June 2025 23:01:06 +0000 (0:00:09.994) 0:00:12.465 ***********
2025-06-01 23:04:00.906634 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-06-01 23:04:00.906641 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-06-01 23:04:00.906648 | orchestrator |
2025-06-01 23:04:00.906655 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-06-01 23:04:00.906662 | orchestrator | Sunday 01 June 2025 23:01:12 +0000 (0:00:05.583) 0:00:18.049 ***********
2025-06-01 23:04:00.906729 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-01 23:04:00.906739 | orchestrator |
2025-06-01 23:04:00.906745 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-06-01 23:04:00.906752 | orchestrator | Sunday 01 June 2025 23:01:15 +0000 (0:00:02.879) 0:00:20.929 ***********
2025-06-01 23:04:00.906759 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-01 23:04:00.906766 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-06-01 23:04:00.906773 | orchestrator |
2025-06-01 23:04:00.906780 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-06-01 23:04:00.906786 | orchestrator | Sunday 01 June 2025 23:01:18 +0000 (0:00:03.525) 0:00:24.454 ***********
2025-06-01 23:04:00.906793 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-01 23:04:00.906800 | orchestrator |
2025-06-01 23:04:00.906806 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-06-01 23:04:00.906813 | orchestrator |
Sunday 01 June 2025 23:01:21 +0000 (0:00:03.026) 0:00:27.480 ***********
2025-06-01 23:04:00.906820 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2025-06-01 23:04:00.906826 | orchestrator |
2025-06-01 23:04:00.906833 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2025-06-01 23:04:00.906839 | orchestrator | Sunday 01 June 2025 23:01:26 +0000 (0:00:04.108) 0:00:31.588 ***********
2025-06-01 23:04:00.906865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-01 23:04:00.906881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', ... same glance-api service definition as testbed-node-0, with node-local address 192.168.16.12 ...})
2025-06-01 23:04:00.906896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', ... same glance-api service definition as testbed-node-0, with node-local address 192.168.16.11 ...})
2025-06-01 23:04:00.906904 | orchestrator |
2025-06-01 23:04:00.906911 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-06-01 23:04:00.906918 | orchestrator | Sunday 01 June 2025 23:01:30 +0000
(0:00:04.370) 0:00:35.959 ***********
2025-06-01 23:04:00.906929 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:04:00.906936 | orchestrator |
2025-06-01 23:04:00.906943 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2025-06-01 23:04:00.906949 | orchestrator | Sunday 01 June 2025 23:01:31 +0000 (0:00:00.577) 0:00:36.536 ***********
2025-06-01 23:04:00.906956 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:04:00.906965 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:04:00.906973 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:04:00.906981 | orchestrator |
2025-06-01 23:04:00.906988 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-06-01 23:04:00.906996 | orchestrator | Sunday 01 June 2025 23:01:35 +0000 (0:00:04.393) 0:00:40.930 ***********
2025-06-01 23:04:00.907004 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-06-01 23:04:00.907012 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-06-01 23:04:00.907020 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-06-01 23:04:00.907033 | orchestrator |
2025-06-01 23:04:00.907041 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-06-01 23:04:00.907050 | orchestrator | Sunday 01 June 2025 23:01:36 +0000 (0:00:01.543) 0:00:42.474 ***********
2025-06-01 23:04:00.907058 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-06-01 23:04:00.907066 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-06-01 23:04:00.907075 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-06-01 23:04:00.907083 | orchestrator |
2025-06-01 23:04:00.907090 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-06-01 23:04:00.907102 | orchestrator | Sunday 01 June 2025 23:01:38 +0000 (0:00:01.192) 0:00:43.666 ***********
2025-06-01 23:04:00.907111 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:04:00.907118 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:04:00.907126 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:04:00.907134 | orchestrator |
2025-06-01 23:04:00.907142 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-06-01 23:04:00.907150 | orchestrator | Sunday 01 June 2025 23:01:39 +0000 (0:00:01.155) 0:00:44.822 ***********
2025-06-01 23:04:00.907157 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:04:00.907165 | orchestrator |
2025-06-01 23:04:00.907173 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-06-01 23:04:00.907181 | orchestrator | Sunday 01 June 2025 23:01:39 +0000 (0:00:00.144) 0:00:44.966 ***********
2025-06-01 23:04:00.907189 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:04:00.907197 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:04:00.907204 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:04:00.907212 | orchestrator |
2025-06-01 23:04:00.907220 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-06-01 23:04:00.907228 | orchestrator | Sunday 01 June 2025 23:01:39 +0000 (0:00:00.479) 0:00:45.446 ***********
2025-06-01 23:04:00.907236 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:04:00.907244 | orchestrator |
2025-06-01 23:04:00.907252 |
orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2025-06-01 23:04:00.907260 | orchestrator | Sunday 01 June 2025 23:01:40 +0000 (0:00:00.832) 0:00:46.279 ***********
2025-06-01 23:04:00.907273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', ... same glance-api service definition as testbed-node-0, with node-local address 192.168.16.12 ...})
2025-06-01 23:04:00.907290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', ... same glance-api service definition as above, with node-local address 192.168.16.10 ...})
2025-06-01 23:04:00.907299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', ... same glance-api service definition as above, with node-local address 192.168.16.11 ...})
2025-06-01 23:04:00.907306 | orchestrator |
2025-06-01 23:04:00.907313 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
2025-06-01 23:04:00.907320 | orchestrator | Sunday 01 June 2025 23:01:45 +0000 (0:00:04.342) 0:00:50.622 ***********
2025-06-01 23:04:00.907333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', ... same glance-api service definition as above, with node-local address 192.168.16.10 ...})
2025-06-01 23:04:00.907350 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:04:00.907357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', ... same glance-api service definition as above, with node-local address 192.168.16.12 ...})
2025-06-01 23:04:00.907365 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:04:00.907378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', ... same glance-api service definition as above, with node-local address 192.168.16.11 ...})
2025-06-01 23:04:00.907390 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:04:00.907397 | orchestrator |
2025-06-01 23:04:00.907404 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
2025-06-01 23:04:00.907410 | orchestrator | Sunday 01 June 2025 23:01:48 +0000 (0:00:03.557) 0:00:54.179 ***********
2025-06-01 23:04:00.907421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-01 23:04:00.907429 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:04:00.907440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-01 23:04:00.907452 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:04:00.907463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', 
'/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-01 23:04:00.907471 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:04:00.907478 | orchestrator | 2025-06-01 23:04:00.907484 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-06-01 23:04:00.907491 | orchestrator | Sunday 01 June 2025 23:01:52 +0000 (0:00:03.497) 0:00:57.677 *********** 2025-06-01 23:04:00.907497 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:04:00.907504 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:04:00.907511 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:04:00.907517 | orchestrator | 2025-06-01 23:04:00.907524 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-06-01 23:04:00.907531 | orchestrator | Sunday 01 June 2025 23:01:55 +0000 (0:00:03.405) 0:01:01.082 *********** 2025-06-01 23:04:00.907543 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 23:04:00.907564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 23:04:00.907572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 23:04:00.907585 | orchestrator | 2025-06-01 23:04:00.907591 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-06-01 23:04:00.907598 | orchestrator | Sunday 01 June 2025 23:02:00 +0000 (0:00:05.378) 0:01:06.460 *********** 2025-06-01 23:04:00.907605 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:04:00.907611 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:04:00.907618 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:04:00.907625 | orchestrator | 2025-06-01 23:04:00.907632 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 
2025-06-01 23:04:00.907806 | orchestrator | Sunday 01 June 2025 23:02:08 +0000 (0:00:07.868) 0:01:14.329 *********** 2025-06-01 23:04:00.907818 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:04:00.907825 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:04:00.907832 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:04:00.907838 | orchestrator | 2025-06-01 23:04:00.907845 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-06-01 23:04:00.907852 | orchestrator | Sunday 01 June 2025 23:02:13 +0000 (0:00:05.007) 0:01:19.337 *********** 2025-06-01 23:04:00.907858 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:04:00.907865 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:04:00.907872 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:04:00.907878 | orchestrator | 2025-06-01 23:04:00.907885 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-06-01 23:04:00.907892 | orchestrator | Sunday 01 June 2025 23:02:17 +0000 (0:00:04.014) 0:01:23.351 *********** 2025-06-01 23:04:00.907898 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:04:00.907905 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:04:00.907912 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:04:00.907918 | orchestrator | 2025-06-01 23:04:00.907925 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-06-01 23:04:00.907931 | orchestrator | Sunday 01 June 2025 23:02:21 +0000 (0:00:04.147) 0:01:27.499 *********** 2025-06-01 23:04:00.907938 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:04:00.907945 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:04:00.907951 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:04:00.907958 | orchestrator | 2025-06-01 23:04:00.907964 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 
2025-06-01 23:04:00.907971 | orchestrator | Sunday 01 June 2025 23:02:27 +0000 (0:00:05.640) 0:01:33.139 *********** 2025-06-01 23:04:00.907978 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:04:00.907985 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:04:00.907991 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:04:00.907998 | orchestrator | 2025-06-01 23:04:00.908004 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-06-01 23:04:00.908016 | orchestrator | Sunday 01 June 2025 23:02:28 +0000 (0:00:00.435) 0:01:33.575 *********** 2025-06-01 23:04:00.908023 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-01 23:04:00.908030 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:04:00.908037 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-01 23:04:00.908043 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:04:00.908050 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-01 23:04:00.908063 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:04:00.908070 | orchestrator | 2025-06-01 23:04:00.908077 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-06-01 23:04:00.908084 | orchestrator | Sunday 01 June 2025 23:02:33 +0000 (0:00:05.952) 0:01:39.527 *********** 2025-06-01 23:04:00.908091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 23:04:00.908106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 23:04:00.908118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 23:04:00.908131 | orchestrator | 2025-06-01 23:04:00.908138 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-01 23:04:00.908144 | orchestrator | Sunday 01 June 2025 23:02:43 +0000 (0:00:09.338) 0:01:48.865 *********** 2025-06-01 23:04:00.908151 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:04:00.908157 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:04:00.908164 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:04:00.908171 | orchestrator | 2025-06-01 23:04:00.908178 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-06-01 23:04:00.908184 | orchestrator | Sunday 01 June 2025 23:02:44 +0000 (0:00:00.906) 0:01:49.771 *********** 2025-06-01 23:04:00.908191 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:04:00.908198 | orchestrator | 2025-06-01 23:04:00.908205 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-06-01 23:04:00.908211 | orchestrator | Sunday 01 June 2025 23:02:46 +0000 (0:00:02.365) 0:01:52.137 
*********** 2025-06-01 23:04:00.908218 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:04:00.908224 | orchestrator | 2025-06-01 23:04:00.908231 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-06-01 23:04:00.908238 | orchestrator | Sunday 01 June 2025 23:02:48 +0000 (0:00:02.116) 0:01:54.254 *********** 2025-06-01 23:04:00.908245 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:04:00.908251 | orchestrator | 2025-06-01 23:04:00.908258 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-06-01 23:04:00.908268 | orchestrator | Sunday 01 June 2025 23:02:50 +0000 (0:00:01.942) 0:01:56.196 *********** 2025-06-01 23:04:00.908275 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:04:00.908282 | orchestrator | 2025-06-01 23:04:00.908289 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-06-01 23:04:00.908295 | orchestrator | Sunday 01 June 2025 23:03:16 +0000 (0:00:26.299) 0:02:22.496 *********** 2025-06-01 23:04:00.908302 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:04:00.908309 | orchestrator | 2025-06-01 23:04:00.908315 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-01 23:04:00.908322 | orchestrator | Sunday 01 June 2025 23:03:21 +0000 (0:00:04.061) 0:02:26.557 *********** 2025-06-01 23:04:00.908329 | orchestrator | 2025-06-01 23:04:00.908335 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-01 23:04:00.908342 | orchestrator | Sunday 01 June 2025 23:03:21 +0000 (0:00:00.160) 0:02:26.718 *********** 2025-06-01 23:04:00.908354 | orchestrator | 2025-06-01 23:04:00.908360 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-01 23:04:00.908367 | orchestrator | Sunday 01 June 2025 23:03:21 +0000 (0:00:00.102) 0:02:26.820 
*********** 2025-06-01 23:04:00.908374 | orchestrator | 2025-06-01 23:04:00.908380 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-06-01 23:04:00.908387 | orchestrator | Sunday 01 June 2025 23:03:21 +0000 (0:00:00.135) 0:02:26.955 *********** 2025-06-01 23:04:00.908394 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:04:00.908401 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:04:00.908407 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:04:00.908414 | orchestrator | 2025-06-01 23:04:00.908421 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:04:00.908429 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-01 23:04:00.908441 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-01 23:04:00.908450 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-01 23:04:00.908458 | orchestrator | 2025-06-01 23:04:00.908465 | orchestrator | 2025-06-01 23:04:00.908473 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 23:04:00.908481 | orchestrator | Sunday 01 June 2025 23:03:59 +0000 (0:00:38.071) 0:03:05.026 *********** 2025-06-01 23:04:00.908490 | orchestrator | =============================================================================== 2025-06-01 23:04:00.908497 | orchestrator | glance : Restart glance-api container ---------------------------------- 38.07s 2025-06-01 23:04:00.908505 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.30s 2025-06-01 23:04:00.908513 | orchestrator | service-ks-register : glance | Creating services ------------------------ 9.99s 2025-06-01 23:04:00.908521 | orchestrator | glance : Check glance containers 
---------------------------------------- 9.34s 2025-06-01 23:04:00.908529 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.87s 2025-06-01 23:04:00.908537 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 5.95s 2025-06-01 23:04:00.908545 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.64s 2025-06-01 23:04:00.908554 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.58s 2025-06-01 23:04:00.908562 | orchestrator | glance : Copying over config.json files for services -------------------- 5.38s 2025-06-01 23:04:00.908571 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.01s 2025-06-01 23:04:00.908579 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.39s 2025-06-01 23:04:00.908587 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.37s 2025-06-01 23:04:00.908595 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.34s 2025-06-01 23:04:00.908602 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.15s 2025-06-01 23:04:00.908611 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.11s 2025-06-01 23:04:00.908619 | orchestrator | glance : Disable log_bin_trust_function_creators function --------------- 4.06s 2025-06-01 23:04:00.908626 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.01s 2025-06-01 23:04:00.908634 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.56s 2025-06-01 23:04:00.908642 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.53s 2025-06-01 23:04:00.908650 | orchestrator | service-cert-copy : glance | Copying over 
backend internal TLS key ------ 3.50s
2025-06-01 23:04:00.908658 | orchestrator | 2025-06-01 23:04:00 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED
2025-06-01 23:04:00.908696 | orchestrator | 2025-06-01 23:04:00 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED
2025-06-01 23:04:00.909738 | orchestrator | 2025-06-01 23:04:00 | INFO  | Task 2447e620-9723-425c-bd09-8af67cb02d01 is in state STARTED
2025-06-01 23:04:00.910879 | orchestrator | 2025-06-01 23:04:00 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED
2025-06-01 23:04:00.910893 | orchestrator | 2025-06-01 23:04:00 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:04:03.949218 | orchestrator | 2025-06-01 23:04:03 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED
2025-06-01 23:04:03.949424 | orchestrator | 2025-06-01 23:04:03 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED
2025-06-01 23:04:03.950724 | orchestrator | 2025-06-01 23:04:03 | INFO  | Task 55290e10-0c9a-4745-b9d8-37144f40f317 is in state STARTED
2025-06-01 23:04:03.951138 | orchestrator | 2025-06-01 23:04:03 | INFO  | Task 2447e620-9723-425c-bd09-8af67cb02d01 is in state SUCCESS
2025-06-01 23:04:03.951855 | orchestrator | 2025-06-01 23:04:03 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED
2025-06-01 23:04:03.951878 | orchestrator | 2025-06-01 23:04:03 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:04:06.997554 | orchestrator | 2025-06-01 23:04:06 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED
2025-06-01 23:04:07.004844 | orchestrator | 2025-06-01 23:04:07 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED
2025-06-01 23:04:07.005543 | orchestrator | 2025-06-01 23:04:07 | INFO  | Task 55290e10-0c9a-4745-b9d8-37144f40f317 is in state STARTED
2025-06-01 23:04:07.007964 | orchestrator | 2025-06-01 23:04:07 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED
2025-06-01 23:04:07.008707 | orchestrator | 2025-06-01 23:04:07 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:04:10.079276 | orchestrator | 2025-06-01 23:04:10 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED
2025-06-01 23:04:10.079430 | orchestrator | 2025-06-01 23:04:10 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED
2025-06-01 23:04:10.079446 | orchestrator | 2025-06-01 23:04:10 | INFO  | Task 55290e10-0c9a-4745-b9d8-37144f40f317 is in state STARTED
2025-06-01 23:04:10.079457 | orchestrator | 2025-06-01 23:04:10 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED
2025-06-01 23:04:10.079467 | orchestrator | 2025-06-01 23:04:10 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:04:13.130168 | orchestrator | 2025-06-01 23:04:13 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state STARTED
2025-06-01 23:04:13.132881 | orchestrator | 2025-06-01 23:04:13 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED
2025-06-01 23:04:13.134851 | orchestrator | 2025-06-01 23:04:13 | INFO  | Task 55290e10-0c9a-4745-b9d8-37144f40f317 is in state STARTED
2025-06-01 23:04:13.136870 | orchestrator | 2025-06-01 23:04:13 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED
2025-06-01 23:04:13.137319 | orchestrator | 2025-06-01 23:04:13 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:04:16.182921 | orchestrator | 2025-06-01 23:04:16 | INFO  | Task 93223891-b177-40b7-bdcc-9b0ddc34a9d0 is in state STARTED
2025-06-01 23:04:16.187303 | orchestrator | 2025-06-01 23:04:16 | INFO  | Task 6c7eea91-6791-4628-bd86-185aa22fc3d5 is in state SUCCESS
2025-06-01 23:04:16.189565 | orchestrator |
2025-06-01 23:04:16.189604 | orchestrator | None
2025-06-01 23:04:16.189617 | orchestrator |
2025-06-01 23:04:16.189629 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 23:04:16.189669 | orchestrator |
2025-06-01 23:04:16.189701 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 23:04:16.189713 | orchestrator | Sunday 01 June 2025 23:00:46 +0000 (0:00:00.298) 0:00:00.298 ***********
2025-06-01 23:04:16.189724 | orchestrator | ok: [testbed-manager]
2025-06-01 23:04:16.189736 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:04:16.189747 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:04:16.189758 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:04:16.189769 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:04:16.189780 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:04:16.189832 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:04:16.189844 | orchestrator |
2025-06-01 23:04:16.189855 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 23:04:16.189879 | orchestrator | Sunday 01 June 2025 23:00:47 +0000 (0:00:00.894) 0:00:01.193 ***********
2025-06-01 23:04:16.189891 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-06-01 23:04:16.189903 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-06-01 23:04:16.189914 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-06-01 23:04:16.189925 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-06-01 23:04:16.189949 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-06-01 23:04:16.189961 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-06-01 23:04:16.189972 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-06-01 23:04:16.189982 | orchestrator |
2025-06-01 23:04:16.189993 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-06-01 23:04:16.190004 | orchestrator |
2025-06-01 23:04:16.190057 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-06-01 23:04:16.190072 | orchestrator | Sunday 01 June 2025 23:00:47 +0000 (0:00:00.696) 0:00:01.890 ***********
2025-06-01 23:04:16.190085 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:04:16.190098 | orchestrator |
2025-06-01 23:04:16.190109 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2025-06-01 23:04:16.190119 | orchestrator | Sunday 01 June 2025 23:00:49 +0000 (0:00:01.660) 0:00:03.550 ***********
2025-06-01 23:04:16.190133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 23:04:16.190150 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 23:04:16.190180 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-01 23:04:16.190206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 23:04:16.190234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 23:04:16.190249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:04:16.190262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:04:16.190276 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 23:04:16.190290 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 23:04:16.190308 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 23:04:16.190329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:04:16.190349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:04:16.190364 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 23:04:16.190377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:04:16.190390 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 23:04:16.190403 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-01 23:04:16.190416 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 23:04:16.190434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:04:16.190455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 23:04:16.190478 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 23:04:16.190492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 23:04:16.190507 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-01 23:04:16.190521 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-01 23:04:16.190538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 23:04:16.190556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:04:16.190568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:04:16.190586 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-01 23:04:16.190597 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:04:16.190609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:04:16.190620 | orchestrator |
2025-06-01 23:04:16.190631 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-06-01 23:04:16.190642 | orchestrator | Sunday 01 June 2025 23:00:53 +0000 (0:00:03.689) 0:00:07.240 ***********
2025-06-01 23:04:16.190653 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:04:16.190664 | orchestrator |
2025-06-01 23:04:16.190713 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2025-06-01 23:04:16.190726 | orchestrator | Sunday 01 June 2025 23:00:54 +0000 (0:00:01.372) 0:00:08.612 ***********
2025-06-01 23:04:16.190737 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-01 23:04:16.190761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 23:04:16.190773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 23:04:16.190793 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 23:04:16.190805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 23:04:16.190816 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 23:04:16.190827 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 23:04:16.190839 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-01 23:04:16.190856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:04:16.190872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:04:16.190884 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 23:04:16.190901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:04:16.190913 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 23:04:16.190924 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 23:04:16.190936 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 23:04:16.190959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:04:16.190982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:04:16.190995 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-01 23:04:16.191014 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-01 23:04:16.191026 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-01 23:04:16.191037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:04:16.191049 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-01 23:04:16.191067 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:04:16.191083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 23:04:16.191095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 23:04:16.191112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-01 23:04:16.191124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:04:16.191135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:04:16.191146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:04:16.191166 | orchestrator |
2025-06-01 23:04:16.191177 |
orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-06-01 23:04:16.191188 | orchestrator | Sunday 01 June 2025 23:01:00 +0000 (0:00:06.281) 0:00:14.894 *********** 2025-06-01 23:04:16.191200 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-01 23:04:16.191216 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 23:04:16.191228 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 23:04:16.191247 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-01 23:04:16.191259 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:04:16.191278 | orchestrator | skipping: [testbed-manager] 2025-06-01 23:04:16.191289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 23:04:16.191301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:04:16.191312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:04:16.191328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 23:04:16.191340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:04:16.191351 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:04:16.191370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 23:04:16.191382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:04:16.191394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:04:16.191412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 23:04:16.191423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 23:04:16.191439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:04:16.191451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:04:16.191462 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:04:16.191473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:04:16.191492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 23:04:16.191504 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:04:16.191522 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:04:16.191533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 23:04:16.191545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 23:04:16.191556 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-01 23:04:16.191567 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:04:16.191583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 23:04:16.191594 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 23:04:16.191612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-01 23:04:16.191624 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:04:16.191635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 23:04:16.191653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 23:04:16.191665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-01 23:04:16.191694 | orchestrator | skipping: [testbed-node-5] 2025-06-01 
23:04:16.191706 | orchestrator | 2025-06-01 23:04:16.191717 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-06-01 23:04:16.191728 | orchestrator | Sunday 01 June 2025 23:01:02 +0000 (0:00:01.731) 0:00:16.625 *********** 2025-06-01 23:04:16.191745 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-01 23:04:16.191757 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 23:04:16.191768 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 23:04:16.191789 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-01 23:04:16.191839 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:04:16.191852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 23:04:16.191864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:04:16.191881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:04:16.192060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 23:04:16.192083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:04:16.192107 | orchestrator | skipping: [testbed-manager] 2025-06-01 23:04:16.192118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 23:04:16.192130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:04:16.192141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:04:16.192152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 23:04:16.192164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:04:16.192175 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:04:16.192186 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:04:16.192202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 23:04:16.192214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:04:16.192239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:04:16.192251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 23:04:16.192262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:04:16.192273 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:04:16.192285 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 23:04:16.192296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 23:04:16.192313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-01 23:04:16.192325 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:04:16.192336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 23:04:16.192603 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 23:04:16.192619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 
'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-01 23:04:16.192631 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:04:16.192642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 23:04:16.192654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 23:04:16.192666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  
2025-06-01 23:04:16.193305 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:04:16.193421 | orchestrator | 2025-06-01 23:04:16.193438 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-06-01 23:04:16.193452 | orchestrator | Sunday 01 June 2025 23:01:04 +0000 (0:00:01.834) 0:00:18.460 *********** 2025-06-01 23:04:16.193489 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-01 23:04:16.193533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 23:04:16.193581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 23:04:16.193595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 23:04:16.193606 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 23:04:16.193617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2025-06-01 23:04:16.193628 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 23:04:16.193641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:04:16.193658 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 23:04:16.193711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:04:16.193734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:04:16.193747 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 23:04:16.193760 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 23:04:16.193772 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 23:04:16.193783 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 23:04:16.193800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:04:16.193820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:04:16.193832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:04:16.193852 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-01 23:04:16.193892 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-01 23:04:16.193908 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-01 23:04:16.193920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 23:04:16.193944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 23:04:16.193956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 23:04:16.193976 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-01 23:04:16.193988 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:04:16.194000 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:04:16.194165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:04:16.194179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:04:16.194190 | orchestrator | 2025-06-01 23:04:16.194202 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-06-01 23:04:16.194222 | orchestrator | Sunday 01 June 2025 23:01:09 +0000 (0:00:05.246) 0:00:23.706 *********** 2025-06-01 23:04:16.194234 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-01 23:04:16.194245 | 
orchestrator | 2025-06-01 23:04:16.194256 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-06-01 23:04:16.194267 | orchestrator | Sunday 01 June 2025 23:01:10 +0000 (0:00:00.926) 0:00:24.632 *********** 2025-06-01 23:04:16.194285 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094232, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5460289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.194297 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094232, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5460289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.194320 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094232, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5460289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.194333 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094219, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5440288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.194344 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094232, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5460289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.194356 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094232, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5460289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 23:04:16.194381 | orchestrator | 
skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094219, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5440288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.194399 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094232, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5460289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.194411 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094232, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5460289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.194430 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094183, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5390286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-01 23:04:16.194442 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rec.rules, mode=0644, size=2309)
2025-06-01 23:04:16.194454 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rec.rules, mode=0644, size=2309)
2025-06-01 23:04:16.194465 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2025-06-01 23:04:16.194484 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rec.rules, mode=0644, size=2309)
2025-06-01 23:04:16.194543 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rec.rules, mode=0644, size=2309)
2025-06-01 23:04:16.194556 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2025-06-01 23:04:16.194577 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2025-06-01 23:04:16.194589 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rec.rules, mode=0644, size=2309)
2025-06-01 23:04:16.194600 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2025-06-01 23:04:16.194612 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2025-06-01 23:04:16.194632 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2025-06-01 23:04:16.194649 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2025-06-01 23:04:16.194660 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2025-06-01 23:04:16.194881 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2025-06-01 23:04:16.194963 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2025-06-01 23:04:16.194981 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2025-06-01 23:04:16.194994 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2025-06-01 23:04:16.195029 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2025-06-01 23:04:16.195057 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2025-06-01 23:04:16.195069 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rules, mode=0644, size=55956)
2025-06-01 23:04:16.195098 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2025-06-01 23:04:16.195111 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rules, mode=0644, size=55956)
2025-06-01 23:04:16.195122 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rules, mode=0644, size=55956)
2025-06-01 23:04:16.195142 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2025-06-01 23:04:16.195153 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rules, mode=0644, size=55956)
2025-06-01 23:04:16.195170 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2025-06-01 23:04:16.195182 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2025-06-01 23:04:16.195201 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2025-06-01 23:04:16.195213 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2025-06-01 23:04:16.195225 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2025-06-01 23:04:16.195244 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2025-06-01 23:04:16.195255 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rules, mode=0644, size=55956)
2025-06-01 23:04:16.195272 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2025-06-01 23:04:16.195284 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rules, mode=0644, size=55956)
2025-06-01 23:04:16.195301 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2025-06-01 23:04:16.195314 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2025-06-01 23:04:16.195326 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2025-06-01 23:04:16.195344 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2025-06-01 23:04:16.195355 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2025-06-01 23:04:16.195372 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2025-06-01 23:04:16.195384 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2025-06-01 23:04:16.195395 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2025-06-01 23:04:16.195413 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2025-06-01 23:04:16.195426 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2025-06-01 23:04:16.195446 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2025-06-01 23:04:16.195459 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2025-06-01 23:04:16.195476 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2025-06-01 23:04:16.195491 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2025-06-01 23:04:16.195505 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/openstack.rules, mode=0644, size=12293)
2025-06-01 23:04:16.195522 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2025-06-01 23:04:16.195539 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2025-06-01 23:04:16.195551 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2025-06-01 23:04:16.195562 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2025-06-01 23:04:16.195574 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2025-06-01 23:04:16.195590 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2025-06-01 23:04:16.195601 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/openstack.rules, mode=0644, size=12293)
2025-06-01 23:04:16.195620 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2025-06-01 23:04:16.195638 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules, mode=0644, size=12293)
2025-06-01 23:04:16.195650 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/openstack.rules, mode=0644, size=12293)
2025-06-01 23:04:16.195661 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2025-06-01 23:04:16.195673 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/openstack.rules, mode=0644, size=12293)
2025-06-01 23:04:16.195711 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2025-06-01 23:04:16.195723 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/fluentd-aggregator.rules, mode=0644, size=996)
2025-06-01 23:04:16.195741 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2025-06-01 23:04:16.195767 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2025-06-01 23:04:16.195778 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules, mode=0644, size=12293)
2025-06-01 23:04:16.195790 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules, mode=0644, size=55956)
2025-06-01 23:04:16.195801 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094189, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5400288,
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.195817 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094203, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5430288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.195829 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094180, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5390286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.195852 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094203, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5430288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.195864 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094189, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5400288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.195876 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094203, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5430288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.195887 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094203, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5430288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.195899 | orchestrator | skipping: [testbed-node-4] => 
(item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094180, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5390286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.195915 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094214, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5440288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.195926 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094180, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5390286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.195951 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094203, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5430288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.195963 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094180, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5390286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.195975 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094180, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5390286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.195986 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094245, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 
'mtime': 1748724126.0, 'ctime': 1748816543.5490289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.195998 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094180, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5390286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.196015 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094207, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5430288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 23:04:16.196026 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094214, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5440288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.196050 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094214, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5440288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.196062 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094214, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5440288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.196073 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094214, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5440288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.196085 | orchestrator | skipping: [testbed-node-5] 
=> (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094214, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5440288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.196096 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094196, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5420287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.196112 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094245, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5490289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.196130 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094245, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5490289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.196149 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094245, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5490289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.196161 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094245, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5490289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.196172 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094233, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 
'ctime': 1748816543.5470288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.196185 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:04:16.196198 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094245, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5490289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.196210 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094196, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5420287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.196226 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094196, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5420287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.196244 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094196, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5420287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.196263 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094222, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5450287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 23:04:16.196274 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094196, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5420287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})  2025-06-01 23:04:16.196286 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094233, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5470288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.196297 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:04:16.196309 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094233, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5470288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.196320 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:04:16.196331 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094233, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5470288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2025-06-01 23:04:16.196342 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:04:16.196364 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094196, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5420287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.196376 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094233, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5470288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-01 23:04:16.196388 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:04:16.196405 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094233, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5470288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2025-06-01 23:04:16.196416 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:04:16.196428 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094230, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5460289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 23:04:16.196440 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094246, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5490289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 23:04:16.196451 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094225, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5450287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 23:04:16.196467 | orchestrator | changed: [testbed-manager] => 
(item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094189, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5400288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 23:04:16.196485 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094203, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5430288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 23:04:16.196496 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094180, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5390286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 23:04:16.196514 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094214, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5440288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 23:04:16.196525 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094245, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5490289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 23:04:16.196537 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094196, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5420287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 23:04:16.196548 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094233, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 
'ctime': 1748816543.5470288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-01 23:04:16.196559 | orchestrator |
2025-06-01 23:04:16.196572 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-06-01 23:04:16.196594 | orchestrator | Sunday 01 June 2025 23:01:36 +0000 (0:00:25.811) 0:00:50.444 ***********
2025-06-01 23:04:16.196605 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-01 23:04:16.196617 | orchestrator |
2025-06-01 23:04:16.196628 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-06-01 23:04:16.196639 | orchestrator | Sunday 01 June 2025 23:01:37 +0000 (0:00:00.759) 0:00:51.203 ***********
2025-06-01 23:04:16.196650 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2025-06-01 23:04:16.196743 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2025-06-01 23:04:16.196798 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-06-01 23:04:16.196851 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-06-01 23:04:16.196905 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-06-01 23:04:16.196966 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-06-01 23:04:16.197021 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-06-01 23:04:16.197075 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-01 23:04:16.197087 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-01 23:04:16.197097 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-01 23:04:16.197108 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-01 23:04:16.197119 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-01 23:04:16.197137 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-01 23:04:16.197148 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-01 23:04:16.197159 | orchestrator |
2025-06-01 23:04:16.197170 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-06-01 23:04:16.197181 | orchestrator | Sunday 01 June 2025 23:01:39 +0000 (0:00:02.053) 0:00:53.256 ***********
2025-06-01 23:04:16.197192 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-01 23:04:16.197203 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:04:16.197214 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-01 23:04:16.197225 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:04:16.197236 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-01 23:04:16.197246 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:04:16.197257 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-01 23:04:16.197268 |
orchestrator | skipping: [testbed-node-3] 2025-06-01 23:04:16.197279 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-01 23:04:16.197290 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:04:16.197300 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-01 23:04:16.197311 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:04:16.197322 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-06-01 23:04:16.197333 | orchestrator | 2025-06-01 23:04:16.197344 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-06-01 23:04:16.197355 | orchestrator | Sunday 01 June 2025 23:01:55 +0000 (0:00:16.684) 0:01:09.941 *********** 2025-06-01 23:04:16.197365 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-01 23:04:16.197376 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:04:16.197387 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-01 23:04:16.197398 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:04:16.197409 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-01 23:04:16.197419 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:04:16.197435 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-01 23:04:16.197446 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:04:16.197457 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-01 23:04:16.197468 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:04:16.197479 | orchestrator | skipping: 
[testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-01 23:04:16.197490 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:04:16.197501 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-06-01 23:04:16.197512 | orchestrator | 2025-06-01 23:04:16.197523 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-06-01 23:04:16.197534 | orchestrator | Sunday 01 June 2025 23:02:00 +0000 (0:00:04.448) 0:01:14.390 *********** 2025-06-01 23:04:16.197545 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-01 23:04:16.197556 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-06-01 23:04:16.197567 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-01 23:04:16.197775 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-01 23:04:16.197795 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-01 23:04:16.197807 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:04:16.197818 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:04:16.197829 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:04:16.197840 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:04:16.197851 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-01 23:04:16.197862 | orchestrator | skipping: [testbed-node-4] 2025-06-01 
23:04:16.197873 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-01 23:04:16.197884 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:04:16.197895 | orchestrator | 2025-06-01 23:04:16.197906 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-06-01 23:04:16.197917 | orchestrator | Sunday 01 June 2025 23:02:03 +0000 (0:00:02.943) 0:01:17.334 *********** 2025-06-01 23:04:16.197929 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-01 23:04:16.197939 | orchestrator | 2025-06-01 23:04:16.197950 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-06-01 23:04:16.197962 | orchestrator | Sunday 01 June 2025 23:02:04 +0000 (0:00:00.774) 0:01:18.109 *********** 2025-06-01 23:04:16.197972 | orchestrator | skipping: [testbed-manager] 2025-06-01 23:04:16.197983 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:04:16.197994 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:04:16.198005 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:04:16.198046 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:04:16.198061 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:04:16.198072 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:04:16.198083 | orchestrator | 2025-06-01 23:04:16.198094 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-06-01 23:04:16.198105 | orchestrator | Sunday 01 June 2025 23:02:04 +0000 (0:00:00.738) 0:01:18.847 *********** 2025-06-01 23:04:16.198116 | orchestrator | skipping: [testbed-manager] 2025-06-01 23:04:16.198127 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:04:16.198138 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:04:16.198148 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:04:16.198159 | 
orchestrator | changed: [testbed-node-1] 2025-06-01 23:04:16.198170 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:04:16.198181 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:04:16.198192 | orchestrator | 2025-06-01 23:04:16.198203 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-06-01 23:04:16.198214 | orchestrator | Sunday 01 June 2025 23:02:07 +0000 (0:00:02.663) 0:01:21.511 *********** 2025-06-01 23:04:16.198225 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-01 23:04:16.198236 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:04:16.198247 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-01 23:04:16.198258 | orchestrator | skipping: [testbed-manager] 2025-06-01 23:04:16.198269 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-01 23:04:16.198280 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:04:16.198291 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-01 23:04:16.198302 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:04:16.198313 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-01 23:04:16.198324 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:04:16.198344 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-01 23:04:16.198355 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:04:16.198366 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-01 23:04:16.198377 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:04:16.198388 | orchestrator | 2025-06-01 23:04:16.198406 | orchestrator | TASK [prometheus : 
Copying config file for blackbox exporter] ****************** 2025-06-01 23:04:16.198420 | orchestrator | Sunday 01 June 2025 23:02:09 +0000 (0:00:01.894) 0:01:23.405 *********** 2025-06-01 23:04:16.198433 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-01 23:04:16.198446 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:04:16.198459 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-01 23:04:16.198472 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:04:16.198485 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-01 23:04:16.198497 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:04:16.198510 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-01 23:04:16.198522 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:04:16.198535 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-01 23:04:16.198548 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:04:16.198561 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-01 23:04:16.198573 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:04:16.198593 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-06-01 23:04:16.198606 | orchestrator | 2025-06-01 23:04:16.198619 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-06-01 23:04:16.198632 | orchestrator | Sunday 01 June 2025 23:02:11 +0000 (0:00:02.558) 0:01:25.964 *********** 2025-06-01 23:04:16.198644 | 
orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
2025-06-01 23:04:16.198727 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-01 23:04:16.198740 | orchestrator |
2025-06-01 23:04:16.198753 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-06-01 23:04:16.198764 | orchestrator | Sunday 01 June 2025 23:02:13 +0000 (0:00:01.562) 0:01:27.526 ***********
2025-06-01 23:04:16.198775 | orchestrator | skipping: [testbed-manager]
2025-06-01 23:04:16.198786 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:04:16.198797 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:04:16.198808 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:04:16.198820 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:04:16.198830 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:04:16.198841 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:04:16.198852 | orchestrator |
2025-06-01 23:04:16.198863 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-06-01 23:04:16.198875 | orchestrator | Sunday 01 June 2025 23:02:14 +0000 (0:00:00.806) 0:01:28.333 ***********
2025-06-01 23:04:16.198886 | orchestrator | skipping: [testbed-manager]
2025-06-01 23:04:16.198897 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:04:16.198908 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:04:16.198919 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:04:16.198936 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:04:16.198947 | orchestrator | skipping: [testbed-node-4]
2025-06-01
23:04:16.198958 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:04:16.198969 | orchestrator | 2025-06-01 23:04:16.198980 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-06-01 23:04:16.198992 | orchestrator | Sunday 01 June 2025 23:02:15 +0000 (0:00:00.823) 0:01:29.157 *********** 2025-06-01 23:04:16.199004 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-01 23:04:16.199019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 23:04:16.199036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 23:04:16.199048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 23:04:16.199067 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 23:04:16.199079 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2025-06-01 23:04:16.199090 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 23:04:16.199109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:04:16.199121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:04:16.199133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:04:16.199149 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-01 23:04:16.199161 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 23:04:16.199179 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 
23:04:16.199191 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 23:04:16.199209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:04:16.199220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:04:16.199232 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 23:04:16.199252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:04:16.199271 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-01 23:04:16.199288 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-01 23:04:16.199302 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-01 23:04:16.199322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 23:04:16.199334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 23:04:16.199346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-01 23:04:16.199362 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-01 23:04:16.199374 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:04:16.199391 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:04:16.199403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:04:16.199425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:04:16.199437 | orchestrator | 2025-06-01 23:04:16.199448 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-06-01 23:04:16.199459 | orchestrator | Sunday 01 June 2025 23:02:19 +0000 (0:00:04.540) 0:01:33.697 *********** 2025-06-01 23:04:16.199470 | orchestrator | skipping: [testbed-manager] => 
(item=testbed-node-0)  2025-06-01 23:04:16.199481 | orchestrator | skipping: [testbed-manager] 2025-06-01 23:04:16.199492 | orchestrator | 2025-06-01 23:04:16.199503 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-01 23:04:16.199514 | orchestrator | Sunday 01 June 2025 23:02:21 +0000 (0:00:01.637) 0:01:35.335 *********** 2025-06-01 23:04:16.199525 | orchestrator | 2025-06-01 23:04:16.199536 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-01 23:04:16.199547 | orchestrator | Sunday 01 June 2025 23:02:21 +0000 (0:00:00.065) 0:01:35.400 *********** 2025-06-01 23:04:16.199558 | orchestrator | 2025-06-01 23:04:16.199569 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-01 23:04:16.199580 | orchestrator | Sunday 01 June 2025 23:02:21 +0000 (0:00:00.062) 0:01:35.462 *********** 2025-06-01 23:04:16.199591 | orchestrator | 2025-06-01 23:04:16.199602 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-01 23:04:16.199612 | orchestrator | Sunday 01 June 2025 23:02:21 +0000 (0:00:00.064) 0:01:35.527 *********** 2025-06-01 23:04:16.199623 | orchestrator | 2025-06-01 23:04:16.199634 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-01 23:04:16.199645 | orchestrator | Sunday 01 June 2025 23:02:21 +0000 (0:00:00.069) 0:01:35.596 *********** 2025-06-01 23:04:16.199656 | orchestrator | 2025-06-01 23:04:16.199667 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-01 23:04:16.199695 | orchestrator | Sunday 01 June 2025 23:02:21 +0000 (0:00:00.217) 0:01:35.814 *********** 2025-06-01 23:04:16.199706 | orchestrator | 2025-06-01 23:04:16.199717 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-01 
23:04:16.199728 | orchestrator | Sunday 01 June 2025 23:02:21 +0000 (0:00:00.062) 0:01:35.876 *********** 2025-06-01 23:04:16.199739 | orchestrator | 2025-06-01 23:04:16.199750 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-06-01 23:04:16.199761 | orchestrator | Sunday 01 June 2025 23:02:21 +0000 (0:00:00.080) 0:01:35.957 *********** 2025-06-01 23:04:16.199777 | orchestrator | changed: [testbed-manager] 2025-06-01 23:04:16.199788 | orchestrator | 2025-06-01 23:04:16.199799 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-06-01 23:04:16.199810 | orchestrator | Sunday 01 June 2025 23:02:37 +0000 (0:00:15.858) 0:01:51.816 *********** 2025-06-01 23:04:16.199821 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:04:16.199832 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:04:16.199843 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:04:16.199853 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:04:16.199864 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:04:16.199881 | orchestrator | changed: [testbed-manager] 2025-06-01 23:04:16.199892 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:04:16.199903 | orchestrator | 2025-06-01 23:04:16.199914 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-06-01 23:04:16.199925 | orchestrator | Sunday 01 June 2025 23:02:54 +0000 (0:00:16.671) 0:02:08.487 *********** 2025-06-01 23:04:16.199936 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:04:16.199947 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:04:16.199958 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:04:16.199969 | orchestrator | 2025-06-01 23:04:16.199979 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-06-01 23:04:16.199990 | orchestrator | Sunday 01 June 2025 23:03:05 +0000 (0:00:10.857) 
0:02:19.344 *********** 2025-06-01 23:04:16.200001 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:04:16.200012 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:04:16.200023 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:04:16.200034 | orchestrator | 2025-06-01 23:04:16.200045 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-06-01 23:04:16.200056 | orchestrator | Sunday 01 June 2025 23:03:15 +0000 (0:00:10.190) 0:02:29.535 *********** 2025-06-01 23:04:16.200067 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:04:16.200083 | orchestrator | changed: [testbed-manager] 2025-06-01 23:04:16.200095 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:04:16.200106 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:04:16.200117 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:04:16.200127 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:04:16.200138 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:04:16.200149 | orchestrator | 2025-06-01 23:04:16.200160 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-06-01 23:04:16.200171 | orchestrator | Sunday 01 June 2025 23:03:33 +0000 (0:00:17.531) 0:02:47.066 *********** 2025-06-01 23:04:16.200182 | orchestrator | changed: [testbed-manager] 2025-06-01 23:04:16.200193 | orchestrator | 2025-06-01 23:04:16.200204 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-06-01 23:04:16.200215 | orchestrator | Sunday 01 June 2025 23:03:40 +0000 (0:00:07.851) 0:02:54.917 *********** 2025-06-01 23:04:16.200226 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:04:16.200237 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:04:16.200248 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:04:16.200258 | orchestrator | 2025-06-01 23:04:16.200269 | orchestrator | RUNNING HANDLER [prometheus : Restart 
prometheus-blackbox-exporter container] *** 2025-06-01 23:04:16.200280 | orchestrator | Sunday 01 June 2025 23:03:52 +0000 (0:00:11.192) 0:03:06.110 *********** 2025-06-01 23:04:16.200291 | orchestrator | changed: [testbed-manager] 2025-06-01 23:04:16.200302 | orchestrator | 2025-06-01 23:04:16.200313 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-06-01 23:04:16.200324 | orchestrator | Sunday 01 June 2025 23:04:02 +0000 (0:00:09.997) 0:03:16.108 *********** 2025-06-01 23:04:16.200335 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:04:16.200346 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:04:16.200357 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:04:16.200368 | orchestrator | 2025-06-01 23:04:16.200379 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:04:16.200390 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-01 23:04:16.200401 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-01 23:04:16.200412 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-01 23:04:16.200423 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-01 23:04:16.200440 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-01 23:04:16.200452 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-01 23:04:16.200463 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-01 23:04:16.200473 | orchestrator | 2025-06-01 23:04:16.200484 | orchestrator | 2025-06-01 23:04:16.200495 | orchestrator | TASKS RECAP 
******************************************************************** 2025-06-01 23:04:16.200506 | orchestrator | Sunday 01 June 2025 23:04:13 +0000 (0:00:11.450) 0:03:27.559 *********** 2025-06-01 23:04:16.200517 | orchestrator | =============================================================================== 2025-06-01 23:04:16.200528 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 25.81s 2025-06-01 23:04:16.200539 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 17.53s 2025-06-01 23:04:16.200550 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.68s 2025-06-01 23:04:16.200565 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 16.67s 2025-06-01 23:04:16.200576 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 15.86s 2025-06-01 23:04:16.200587 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.45s 2025-06-01 23:04:16.200598 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.19s 2025-06-01 23:04:16.200609 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.86s 2025-06-01 23:04:16.200620 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.19s 2025-06-01 23:04:16.200630 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.00s 2025-06-01 23:04:16.200641 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.85s 2025-06-01 23:04:16.200652 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.28s 2025-06-01 23:04:16.200663 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.25s 2025-06-01 23:04:16.200689 | orchestrator | prometheus : Check prometheus 
containers -------------------------------- 4.54s 2025-06-01 23:04:16.200700 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.45s 2025-06-01 23:04:16.200711 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.69s 2025-06-01 23:04:16.200722 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.94s 2025-06-01 23:04:16.200732 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.66s 2025-06-01 23:04:16.200749 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.56s 2025-06-01 23:04:16.200760 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.05s 2025-06-01 23:04:16.200771 | orchestrator | 2025-06-01 23:04:16 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:04:16.200783 | orchestrator | 2025-06-01 23:04:16 | INFO  | Task 55290e10-0c9a-4745-b9d8-37144f40f317 is in state STARTED 2025-06-01 23:04:16.200794 | orchestrator | 2025-06-01 23:04:16 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED 2025-06-01 23:04:16.200805 | orchestrator | 2025-06-01 23:04:16 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:04:19.259915 | orchestrator | 2025-06-01 23:04:19 | INFO  | Task 93223891-b177-40b7-bdcc-9b0ddc34a9d0 is in state STARTED 2025-06-01 23:04:19.262967 | orchestrator | 2025-06-01 23:04:19 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:04:19.266692 | orchestrator | 2025-06-01 23:04:19 | INFO  | Task 55290e10-0c9a-4745-b9d8-37144f40f317 is in state STARTED 2025-06-01 23:04:19.268781 | orchestrator | 2025-06-01 23:04:19 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED 2025-06-01 23:04:19.269432 | orchestrator | 2025-06-01 23:04:19 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:04:22.317538 | 
orchestrator | 2025-06-01 23:04:22 | INFO  | Task 93223891-b177-40b7-bdcc-9b0ddc34a9d0 is in state STARTED 2025-06-01 23:04:22.318765 | orchestrator | 2025-06-01 23:04:22 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:04:22.320397 | orchestrator | 2025-06-01 23:04:22 | INFO  | Task 55290e10-0c9a-4745-b9d8-37144f40f317 is in state STARTED 2025-06-01 23:04:22.322110 | orchestrator | 2025-06-01 23:04:22 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED 2025-06-01 23:04:22.322136 | orchestrator | 2025-06-01 23:04:22 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:05:26.284827 | orchestrator | 2025-06-01 23:05:26 | INFO  | Task
93223891-b177-40b7-bdcc-9b0ddc34a9d0 is in state STARTED 2025-06-01 23:05:26.288036 | orchestrator | 2025-06-01 23:05:26 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:05:26.290014 | orchestrator | 2025-06-01 23:05:26 | INFO  | Task 55290e10-0c9a-4745-b9d8-37144f40f317 is in state STARTED 2025-06-01 23:05:26.292647 | orchestrator | 2025-06-01 23:05:26 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state STARTED 2025-06-01 23:05:26.292888 | orchestrator | 2025-06-01 23:05:26 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:05:29.322256 | orchestrator | 2025-06-01 23:05:29 | INFO  | Task 93223891-b177-40b7-bdcc-9b0ddc34a9d0 is in state STARTED 2025-06-01 23:05:29.322913 | orchestrator | 2025-06-01 23:05:29 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:05:29.324643 | orchestrator | 2025-06-01 23:05:29 | INFO  | Task 55290e10-0c9a-4745-b9d8-37144f40f317 is in state STARTED 2025-06-01 23:05:29.327077 | orchestrator | 2025-06-01 23:05:29 | INFO  | Task 17729d11-89e6-4048-a78c-1fc3731950e7 is in state SUCCESS 2025-06-01 23:05:29.330852 | orchestrator | 2025-06-01 23:05:29.330888 | orchestrator | 2025-06-01 23:05:29.330901 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 23:05:29.330913 | orchestrator | 2025-06-01 23:05:29.330924 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 23:05:29.330935 | orchestrator | Sunday 01 June 2025 23:01:25 +0000 (0:00:00.239) 0:00:00.239 *********** 2025-06-01 23:05:29.330947 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:05:29.330960 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:05:29.330971 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:05:29.330982 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:05:29.330993 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:05:29.331004 | orchestrator | ok: 
[testbed-node-5]
2025-06-01 23:05:29.331015 | orchestrator |
2025-06-01 23:05:29.331027 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 23:05:29.331038 | orchestrator | Sunday 01 June 2025 23:01:26 +0000 (0:00:00.701) 0:00:00.941 ***********
2025-06-01 23:05:29.331088 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-06-01 23:05:29.331103 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-06-01 23:05:29.331114 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-06-01 23:05:29.331124 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-06-01 23:05:29.331154 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-06-01 23:05:29.331166 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-06-01 23:05:29.331177 | orchestrator |
2025-06-01 23:05:29.331188 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-06-01 23:05:29.331278 | orchestrator |
2025-06-01 23:05:29.331292 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-01 23:05:29.331303 | orchestrator | Sunday 01 June 2025 23:01:27 +0000 (0:00:01.049) 0:00:01.990 ***********
2025-06-01 23:05:29.331315 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:05:29.331328 | orchestrator |
2025-06-01 23:05:29.331339 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-06-01 23:05:29.331349 | orchestrator | Sunday 01 June 2025 23:01:29 +0000 (0:00:01.581) 0:00:03.571 ***********
2025-06-01 23:05:29.331362 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-06-01 23:05:29.331372 | orchestrator |
2025-06-01 23:05:29.331383 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-06-01 23:05:29.331394 | orchestrator | Sunday 01 June 2025 23:01:32 +0000 (0:00:03.091) 0:00:06.663 ***********
2025-06-01 23:05:29.331405 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-06-01 23:05:29.331416 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-06-01 23:05:29.331428 | orchestrator |
2025-06-01 23:05:29.331438 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-06-01 23:05:29.331449 | orchestrator | Sunday 01 June 2025 23:01:38 +0000 (0:00:05.752) 0:00:12.415 ***********
2025-06-01 23:05:29.331460 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-01 23:05:29.331471 | orchestrator |
2025-06-01 23:05:29.331484 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-06-01 23:05:29.331494 | orchestrator | Sunday 01 June 2025 23:01:40 +0000 (0:00:02.870) 0:00:15.286 ***********
2025-06-01 23:05:29.331520 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-01 23:05:29.331531 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-06-01 23:05:29.331543 | orchestrator |
2025-06-01 23:05:29.331719 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-06-01 23:05:29.331733 | orchestrator | Sunday 01 June 2025 23:01:44 +0000 (0:00:03.711) 0:00:18.997 ***********
2025-06-01 23:05:29.331744 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-01 23:05:29.331755 | orchestrator |
2025-06-01 23:05:29.331766 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-06-01 23:05:29.331796 | orchestrator | Sunday 01 June 2025 23:01:48 +0000 (0:00:03.583) 0:00:22.581 ***********
2025-06-01 23:05:29.331808 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2025-06-01 23:05:29.331819 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2025-06-01 23:05:29.331829 | orchestrator |
2025-06-01 23:05:29.331840 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2025-06-01 23:05:29.331851 | orchestrator | Sunday 01 June 2025 23:01:55 +0000 (0:00:07.519) 0:00:30.100 ***********
2025-06-01 23:05:29.331865 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-01 23:05:29.331902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-01 23:05:29.331916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-01 23:05:29.331928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-01 23:05:29.331949 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-01 23:05:29.331963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 23:05:29.331982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 23:05:29.331999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 23:05:29.332011 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-01 23:05:29.332030 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-01 23:05:29.332041 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-01 23:05:29.332059 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-01 23:05:29.332071 | orchestrator |
2025-06-01 23:05:29.332082 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-01 23:05:29.332094 | orchestrator | Sunday 01 June 2025 23:01:59 +0000 (0:00:03.507) 0:00:33.608 ***********
2025-06-01 23:05:29.332105 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:05:29.332116 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:05:29.332127 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:05:29.332138 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:05:29.332149 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:05:29.332160 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:05:29.332170 | orchestrator |
2025-06-01 23:05:29.332181 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-01 23:05:29.332192 | orchestrator | Sunday 01 June 2025 23:01:59 +0000 (0:00:00.583) 0:00:34.192 ***********
2025-06-01 23:05:29.332203 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:05:29.332214 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:05:29.332224 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:05:29.332240 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:05:29.332252 | orchestrator |
2025-06-01 23:05:29.332262 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-06-01 23:05:29.332280 | orchestrator | Sunday 01 June 2025 23:02:00 +0000 (0:00:01.087) 0:00:35.279 ***********
2025-06-01 23:05:29.332291 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-06-01 23:05:29.332302 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-06-01 23:05:29.332313 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-06-01 23:05:29.332324 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-06-01 23:05:29.332334 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-06-01 23:05:29.332345 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-06-01 23:05:29.332356 | orchestrator |
2025-06-01 23:05:29.332367 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-06-01 23:05:29.332377 | orchestrator | Sunday 01 June 2025 23:02:03 +0000 (0:00:02.929) 0:00:38.209 ***********
2025-06-01 23:05:29.332389 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 23:05:29.332404 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 23:05:29.332422 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 23:05:29.332434 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 23:05:29.332457 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 23:05:29.332469 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 23:05:29.332481 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 23:05:29.332499 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 23:05:29.332516 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 23:05:29.332535 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 23:05:29.332549 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 23:05:29.332560 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-01 23:05:29.332571 | orchestrator |
2025-06-01 23:05:29.332583 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-06-01 23:05:29.332594 | orchestrator | Sunday 01 June 2025 23:02:08 +0000 (0:00:04.463) 0:00:42.672 ***********
2025-06-01 23:05:29.332605 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-06-01 23:05:29.332617 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-06-01 23:05:29.332628 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-06-01 23:05:29.332639 | orchestrator |
2025-06-01 23:05:29.332650 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-06-01 23:05:29.332661 | orchestrator | Sunday 01 June 2025 23:02:10 +0000 (0:00:02.146) 0:00:44.818 ***********
2025-06-01 23:05:29.332678 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-06-01 23:05:29.332705 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-06-01 23:05:29.332730 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-06-01 23:05:29.332749 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-06-01 23:05:29.332767 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-06-01 23:05:29.332792 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-06-01 23:05:29.332814 | orchestrator |
2025-06-01 23:05:29.332832 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-06-01 23:05:29.332849 | orchestrator | Sunday 01 June 2025 23:02:14 +0000 (0:00:04.060) 0:00:48.879 ***********
2025-06-01 23:05:29.332866 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-06-01 23:05:29.332882 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-06-01 23:05:29.332900 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-06-01 23:05:29.332917 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-06-01 23:05:29.332953 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-06-01 23:05:29.332972 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-06-01 23:05:29.332991 | orchestrator |
2025-06-01 23:05:29.333009 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-06-01 23:05:29.333029 | orchestrator | Sunday 01 June 2025 23:02:15 +0000 (0:00:01.169) 0:00:50.049 ***********
2025-06-01 23:05:29.333049 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:05:29.333062 | orchestrator |
2025-06-01 23:05:29.333073 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-06-01 23:05:29.333083 | orchestrator | Sunday 01 June 2025 23:02:16 +0000 (0:00:00.262) 0:00:50.311 ***********
2025-06-01 23:05:29.333094 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:05:29.333105 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:05:29.333116 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:05:29.333127 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:05:29.333137 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:05:29.333148 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:05:29.333159 | orchestrator |
2025-06-01 23:05:29.333169 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-01 23:05:29.333180 | orchestrator | Sunday 01 June 2025 23:02:17 +0000 (0:00:01.648) 0:00:51.959 ***********
2025-06-01 23:05:29.333193 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:05:29.333205 | orchestrator |
2025-06-01 23:05:29.333216 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-06-01 23:05:29.333227 | orchestrator | Sunday 01 June 2025 23:02:19 +0000 (0:00:01.506) 0:00:53.466 ***********
2025-06-01 23:05:29.333239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-01 23:05:29.333251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-01 23:05:29.333285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-01 23:05:29.333303 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-01 23:05:29.333316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 23:05:29.333327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-01 23:05:29.333339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 23:05:29.333984 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-01 23:05:29.334079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 23:05:29.334110 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-01 23:05:29.334135 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes':
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.334157 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.334189 | orchestrator | 2025-06-01 23:05:29.334202 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-06-01 23:05:29.334213 | orchestrator | Sunday 01 June 2025 23:02:22 +0000 (0:00:03.551) 0:00:57.017 *********** 2025-06-01 23:05:29.334234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 23:05:29.334246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.334264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}})  2025-06-01 23:05:29.334276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.334288 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:05:29.334300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 23:05:29.334318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.334330 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:05:29.334341 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:05:29.334361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.334378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.334390 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:05:29.334401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.334413 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.334436 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:05:29.334447 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.334467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.334478 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:05:29.334489 | orchestrator | 2025-06-01 23:05:29.334500 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-06-01 23:05:29.334511 | orchestrator | Sunday 01 June 2025 23:02:24 +0000 (0:00:02.068) 0:00:59.086 *********** 2025-06-01 23:05:29.334528 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 23:05:29.334540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.334559 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:05:29.334570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 23:05:29.334582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.334601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 23:05:29.334617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.334629 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:05:29.334640 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:05:29.334651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.334669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.334681 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:05:29.334722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.334744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.334756 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:05:29.334773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.334785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.334813 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:05:29.334824 | orchestrator | 2025-06-01 23:05:29.334836 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-06-01 23:05:29.334847 | orchestrator | Sunday 01 June 2025 23:02:27 +0000 (0:00:02.929) 0:01:02.015 *********** 2025-06-01 23:05:29.334859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 23:05:29.334871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 23:05:29.334889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 23:05:29.334906 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.334926 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.334937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.334949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.334966 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.334983 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 
2025-06-01 23:05:29.334995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.335013 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.335025 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.335036 | orchestrator | 2025-06-01 23:05:29.335047 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-06-01 23:05:29.335058 | orchestrator | Sunday 01 June 2025 23:02:31 +0000 (0:00:03.992) 0:01:06.007 *********** 2025-06-01 23:05:29.335069 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-01 23:05:29.335080 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:05:29.335091 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-01 23:05:29.335102 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:05:29.335112 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-01 23:05:29.335123 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:05:29.335134 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-01 23:05:29.335153 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-01 23:05:29.335178 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-01 23:05:29.335198 | orchestrator | 2025-06-01 23:05:29.335216 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-06-01 23:05:29.335234 | orchestrator | Sunday 01 June 2025 23:02:34 +0000 (0:00:02.793) 0:01:08.801 *********** 2025-06-01 23:05:29.335261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 23:05:29.335292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 23:05:29.335312 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.335333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 23:05:29.335364 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.335392 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.335423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.335443 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.335464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.335483 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-01 
23:05:29.335511 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.335536 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.335548 | orchestrator | 2025-06-01 23:05:29.335560 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-06-01 23:05:29.335571 | orchestrator | Sunday 01 June 2025 23:02:47 +0000 (0:00:13.182) 0:01:21.983 *********** 2025-06-01 23:05:29.335582 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:05:29.335593 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:05:29.335603 | orchestrator | skipping: [testbed-node-2] 
2025-06-01 23:05:29.335614 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:05:29.335625 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:05:29.335636 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:05:29.335646 | orchestrator | 2025-06-01 23:05:29.335657 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-06-01 23:05:29.335668 | orchestrator | Sunday 01 June 2025 23:02:49 +0000 (0:00:01.906) 0:01:23.889 *********** 2025-06-01 23:05:29.335679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 23:05:29.335714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.335726 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:05:29.335744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 23:05:29.335771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}})  2025-06-01 23:05:29.335783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.335795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.335806 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:05:29.335817 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:05:29.335828 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.335839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.335858 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:05:29.335876 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.335894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.335906 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:05:29.335917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.335929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 23:05:29.335940 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:05:29.335951 | orchestrator | 2025-06-01 23:05:29.335962 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-06-01 23:05:29.335974 | orchestrator | Sunday 01 June 2025 23:02:50 +0000 (0:00:01.068) 0:01:24.958 *********** 2025-06-01 23:05:29.335985 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:05:29.335996 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:05:29.336006 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:05:29.336017 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:05:29.336028 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:05:29.336049 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:05:29.336060 | orchestrator | 2025-06-01 23:05:29.336071 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-06-01 23:05:29.336082 | orchestrator | Sunday 01 June 2025 23:02:51 +0000 (0:00:00.818) 0:01:25.777 *********** 2025-06-01 23:05:29.336100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 23:05:29.336117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 23:05:29.336130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 23:05:29.336141 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.336153 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.336178 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.336195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.336207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.336218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.336230 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.336248 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.336267 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-01 23:05:29.336278 | orchestrator | 2025-06-01 23:05:29.336290 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-01 23:05:29.336301 | orchestrator | Sunday 01 June 2025 23:02:53 +0000 (0:00:02.388) 0:01:28.166 *********** 2025-06-01 23:05:29.336317 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:05:29.336329 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:05:29.336340 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:05:29.336350 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:05:29.336361 | orchestrator | skipping: [testbed-node-4] 
2025-06-01 23:05:29.336372 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:05:29.336383 | orchestrator |
2025-06-01 23:05:29.336394 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2025-06-01 23:05:29.336405 | orchestrator | Sunday 01 June 2025 23:02:54 +0000 (0:00:00.875) 0:01:29.042 ***********
2025-06-01 23:05:29.336416 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:05:29.336427 | orchestrator |
2025-06-01 23:05:29.336438 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2025-06-01 23:05:29.336449 | orchestrator | Sunday 01 June 2025 23:02:56 +0000 (0:00:01.964) 0:01:31.007 ***********
2025-06-01 23:05:29.336460 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:05:29.336471 | orchestrator |
2025-06-01 23:05:29.336481 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2025-06-01 23:05:29.336492 | orchestrator | Sunday 01 June 2025 23:02:58 +0000 (0:00:02.063) 0:01:33.070 ***********
2025-06-01 23:05:29.336503 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:05:29.336514 | orchestrator |
2025-06-01 23:05:29.336525 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-06-01 23:05:29.336535 | orchestrator | Sunday 01 June 2025 23:03:14 +0000 (0:00:16.198) 0:01:49.269 ***********
2025-06-01 23:05:29.336546 | orchestrator |
2025-06-01 23:05:29.336557 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-06-01 23:05:29.336568 | orchestrator | Sunday 01 June 2025 23:03:15 +0000 (0:00:00.076) 0:01:49.346 ***********
2025-06-01 23:05:29.336579 | orchestrator |
2025-06-01 23:05:29.336590 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-06-01 23:05:29.336601 | orchestrator | Sunday 01 June 2025 23:03:15 +0000 (0:00:00.070) 0:01:49.417 ***********
2025-06-01 23:05:29.336619 | orchestrator |
2025-06-01 23:05:29.336630 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-06-01 23:05:29.336640 | orchestrator | Sunday 01 June 2025 23:03:15 +0000 (0:00:00.073) 0:01:49.490 ***********
2025-06-01 23:05:29.336651 | orchestrator |
2025-06-01 23:05:29.336662 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-06-01 23:05:29.336673 | orchestrator | Sunday 01 June 2025 23:03:15 +0000 (0:00:00.077) 0:01:49.567 ***********
2025-06-01 23:05:29.336683 | orchestrator |
2025-06-01 23:05:29.336712 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-06-01 23:05:29.336723 | orchestrator | Sunday 01 June 2025 23:03:15 +0000 (0:00:00.069) 0:01:49.637 ***********
2025-06-01 23:05:29.336734 | orchestrator |
2025-06-01 23:05:29.336745 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2025-06-01 23:05:29.336756 | orchestrator | Sunday 01 June 2025 23:03:15 +0000 (0:00:00.069) 0:01:49.706 ***********
2025-06-01 23:05:29.336766 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:05:29.336777 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:05:29.336788 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:05:29.336799 | orchestrator |
2025-06-01 23:05:29.336810 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-06-01 23:05:29.336820 | orchestrator | Sunday 01 June 2025 23:03:47 +0000 (0:00:32.012) 0:02:21.718 ***********
2025-06-01 23:05:29.336831 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:05:29.336842 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:05:29.336852 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:05:29.336863 | orchestrator |
2025-06-01 23:05:29.336874 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container]
*********************
2025-06-01 23:05:29.336885 | orchestrator | Sunday 01 June 2025 23:03:57 +0000 (0:00:10.088) 0:02:31.807 ***********
2025-06-01 23:05:29.336896 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:05:29.336906 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:05:29.336917 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:05:29.336928 | orchestrator |
2025-06-01 23:05:29.336938 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-06-01 23:05:29.336949 | orchestrator | Sunday 01 June 2025 23:05:18 +0000 (0:01:21.028) 0:03:52.835 ***********
2025-06-01 23:05:29.336960 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:05:29.336971 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:05:29.336987 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:05:29.336998 | orchestrator |
2025-06-01 23:05:29.337009 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-06-01 23:05:29.337020 | orchestrator | Sunday 01 June 2025 23:05:28 +0000 (0:00:09.580) 0:04:02.416 ***********
2025-06-01 23:05:29.337031 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:05:29.337041 | orchestrator |
2025-06-01 23:05:29.337052 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:05:29.337069 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-01 23:05:29.337080 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-01 23:05:29.337092 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-01 23:05:29.337103 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-01 23:05:29.337114 | orchestrator | testbed-node-4 : ok=18  changed=12
unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-01 23:05:29.337130 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-01 23:05:29.337148 | orchestrator |
2025-06-01 23:05:29.337159 | orchestrator |
2025-06-01 23:05:29.337170 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:05:29.337181 | orchestrator | Sunday 01 June 2025 23:05:28 +0000 (0:00:00.772) 0:04:03.189 ***********
2025-06-01 23:05:29.337192 | orchestrator | ===============================================================================
2025-06-01 23:05:29.337203 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 81.03s
2025-06-01 23:05:29.337213 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 32.01s
2025-06-01 23:05:29.337224 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 16.20s
2025-06-01 23:05:29.337235 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 13.18s
2025-06-01 23:05:29.337246 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.09s
2025-06-01 23:05:29.337256 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 9.58s
2025-06-01 23:05:29.337267 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.52s
2025-06-01 23:05:29.337278 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.75s
2025-06-01 23:05:29.337288 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.46s
2025-06-01 23:05:29.337299 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 4.06s
2025-06-01 23:05:29.337310 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.99s
2025-06-01 23:05:29.337321 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.71s
2025-06-01 23:05:29.337331 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.58s
2025-06-01 23:05:29.337342 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.55s
2025-06-01 23:05:29.337353 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.51s
2025-06-01 23:05:29.337363 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.09s
2025-06-01 23:05:29.337374 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.93s
2025-06-01 23:05:29.337385 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 2.93s
2025-06-01 23:05:29.337396 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.87s
2025-06-01 23:05:29.337406 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.79s
2025-06-01 23:05:29.337417 | orchestrator | 2025-06-01 23:05:29 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:05:32.366272 | orchestrator | 2025-06-01 23:05:32 | INFO  | Task e2d156ee-7c68-487b-9423-f704c1f2ec53 is in state STARTED
2025-06-01 23:05:32.368553 | orchestrator | 2025-06-01 23:05:32 | INFO  | Task 93223891-b177-40b7-bdcc-9b0ddc34a9d0 is in state STARTED
2025-06-01 23:05:32.370448 | orchestrator | 2025-06-01 23:05:32 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED
2025-06-01 23:05:32.372904 | orchestrator | 2025-06-01 23:05:32 | INFO  | Task 55290e10-0c9a-4745-b9d8-37144f40f317 is in state STARTED
2025-06-01 23:05:32.372981 | orchestrator | 2025-06-01 23:05:32 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:05:35.416180 | orchestrator | 2025-06-01 23:05:35 | INFO  | Task
e2d156ee-7c68-487b-9423-f704c1f2ec53 is in state STARTED
2025-06-01 23:06:11.915259 | orchestrator | 2025-06-01 23:06:11 | INFO  | Task 93223891-b177-40b7-bdcc-9b0ddc34a9d0 is in state STARTED
2025-06-01 23:06:11.917339 | orchestrator | 2025-06-01 23:06:11 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED
2025-06-01 23:06:11.918783 | orchestrator | 2025-06-01 23:06:11 | INFO  | Task 55290e10-0c9a-4745-b9d8-37144f40f317 is in state STARTED
2025-06-01 23:06:11.918975 | orchestrator | 2025-06-01 23:06:11 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:06:14.948933 | orchestrator | 2025-06-01 23:06:14 | INFO  | Task e2d156ee-7c68-487b-9423-f704c1f2ec53 is in state STARTED
2025-06-01 23:06:14.949494 | orchestrator | 2025-06-01 23:06:14 | INFO  | Task 93223891-b177-40b7-bdcc-9b0ddc34a9d0 is in state STARTED
2025-06-01 23:06:14.950282 | orchestrator | 2025-06-01 23:06:14 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED
2025-06-01 23:06:14.952021 | orchestrator | 2025-06-01 23:06:14 | INFO  | Task 55290e10-0c9a-4745-b9d8-37144f40f317 is in state STARTED
2025-06-01 23:06:14.952047 | orchestrator | 2025-06-01 23:06:14 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:06:17.985919 | orchestrator | 2025-06-01 23:06:17 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED
2025-06-01 23:06:17.986108 | orchestrator | 2025-06-01 23:06:17 | INFO  | Task e2d156ee-7c68-487b-9423-f704c1f2ec53 is in state STARTED
2025-06-01 23:06:17.988624 | orchestrator |
2025-06-01 23:06:17.988659 | orchestrator |
2025-06-01 23:06:17.988821 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 23:06:17.988834 | orchestrator |
2025-06-01 23:06:17.988846 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 23:06:17.988858 | orchestrator | Sunday 01 June 2025 23:04:18 +0000 (0:00:00.272) 0:00:00.272
***********
2025-06-01 23:06:17.988870 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:06:17.988882 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:06:17.988893 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:06:17.988904 | orchestrator |
2025-06-01 23:06:17.988915 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 23:06:17.988926 | orchestrator | Sunday 01 June 2025 23:04:18 +0000 (0:00:00.306) 0:00:00.578 ***********
2025-06-01 23:06:17.988938 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-06-01 23:06:17.988950 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-06-01 23:06:17.988961 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-06-01 23:06:17.988972 | orchestrator |
2025-06-01 23:06:17.988983 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-06-01 23:06:17.988994 | orchestrator |
2025-06-01 23:06:17.989004 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-06-01 23:06:17.989015 | orchestrator | Sunday 01 June 2025 23:04:19 +0000 (0:00:00.556) 0:00:01.135 ***********
2025-06-01 23:06:17.989026 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:06:17.989038 | orchestrator |
2025-06-01 23:06:17.989049 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-06-01 23:06:17.989059 | orchestrator | Sunday 01 June 2025 23:04:19 +0000 (0:00:00.573) 0:00:01.708 ***********
2025-06-01 23:06:17.989071 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-06-01 23:06:17.989082 | orchestrator |
2025-06-01 23:06:17.989092 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-06-01 23:06:17.989123 | orchestrator | Sunday 01 June 2025 23:04:22 +0000 (0:00:03.153) 0:00:04.861 ***********
2025-06-01 23:06:17.989134 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-06-01 23:06:17.989146 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-06-01 23:06:17.989181 | orchestrator |
2025-06-01 23:06:17.989196 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-06-01 23:06:17.989209 | orchestrator | Sunday 01 June 2025 23:04:28 +0000 (0:00:06.118) 0:00:10.980 ***********
2025-06-01 23:06:17.989222 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-01 23:06:17.989234 | orchestrator |
2025-06-01 23:06:17.989247 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-06-01 23:06:17.989260 | orchestrator | Sunday 01 June 2025 23:04:32 +0000 (0:00:03.048) 0:00:14.028 ***********
2025-06-01 23:06:17.989273 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-01 23:06:17.989286 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-06-01 23:06:17.989298 | orchestrator |
2025-06-01 23:06:17.989311 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-06-01 23:06:17.989324 | orchestrator | Sunday 01 June 2025 23:04:35 +0000 (0:00:03.756) 0:00:17.784 ***********
2025-06-01 23:06:17.989336 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-01 23:06:17.989349 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-06-01 23:06:17.989362 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-06-01 23:06:17.989375 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-06-01 23:06:17.989387 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-06-01
23:06:17.989412 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-06-01 23:06:17.989425 | orchestrator | Sunday 01 June 2025 23:04:50 +0000 (0:00:14.809) 0:00:32.594 ***********
2025-06-01 23:06:17.989437 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-06-01 23:06:17.989450 | orchestrator |
2025-06-01 23:06:17.989463 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-06-01 23:06:17.989476 | orchestrator | Sunday 01 June 2025 23:04:54 +0000 (0:00:04.120) 0:00:36.715 ***********
2025-06-01 23:06:17.989493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-01 23:06:17.989527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 23:06:17.989547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 23:06:17.989568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.989581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.989593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.989613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.989627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.989651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.989663 | orchestrator | 2025-06-01 23:06:17.989674 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-06-01 23:06:17.989686 | orchestrator | Sunday 01 June 2025 23:04:56 +0000 (0:00:01.810) 0:00:38.526 *********** 2025-06-01 23:06:17.989697 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 
2025-06-01 23:06:17.989728 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-06-01 23:06:17.989739 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-06-01 23:06:17.989750 | orchestrator |
2025-06-01 23:06:17.989761 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-06-01 23:06:17.989772 | orchestrator | Sunday 01 June 2025 23:04:58 +0000 (0:00:01.925) 0:00:40.452 ***********
2025-06-01 23:06:17.989783 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:06:17.989793 | orchestrator |
2025-06-01 23:06:17.989804 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-06-01 23:06:17.989815 | orchestrator | Sunday 01 June 2025 23:04:58 +0000 (0:00:00.418) 0:00:40.870 ***********
2025-06-01 23:06:17.989825 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:06:17.989836 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:06:17.989847 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:06:17.989858 | orchestrator |
2025-06-01 23:06:17.989869 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-06-01 23:06:17.989879 | orchestrator | Sunday 01 June 2025 23:05:00 +0000 (0:00:01.323) 0:00:42.194 ***********
2025-06-01 23:06:17.989890 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:06:17.989901 | orchestrator |
2025-06-01 23:06:17.989912 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-06-01 23:06:17.989922 | orchestrator | Sunday 01 June 2025 23:05:00 +0000 (0:00:00.578) 0:00:42.773 ***********
2025-06-01 23:06:17.989934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 23:06:17.989954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 23:06:17.989978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 23:06:17.989990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.990002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.990014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.990071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.990099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.990112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.990123 | orchestrator | 2025-06-01 23:06:17.990134 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-06-01 23:06:17.990145 | orchestrator | Sunday 01 June 2025 23:05:04 +0000 (0:00:03.918) 0:00:46.692 *********** 2025-06-01 23:06:17.990163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 23:06:17.990175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 23:06:17.990187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:06:17.990199 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:06:17.990225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 23:06:17.990237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 23:06:17.990249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:06:17.990260 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:06:17.990273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 23:06:17.990284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 23:06:17.990340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:06:17.990362 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:06:17.990373 | orchestrator | 2025-06-01 23:06:17.990391 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-06-01 23:06:17.990402 | orchestrator | Sunday 01 June 2025 23:05:05 +0000 (0:00:01.160) 0:00:47.852 *********** 2025-06-01 23:06:17.990414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 23:06:17.990432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 23:06:17.990443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:06:17.990455 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:06:17.990466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 23:06:17.990492 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 23:06:17.990511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:06:17.990522 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:06:17.990539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': 
'30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 23:06:17.990551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 23:06:17.990563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:06:17.990574 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:06:17.990585 | orchestrator | 2025-06-01 23:06:17.990596 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-06-01 23:06:17.990607 | orchestrator | Sunday 01 June 2025 23:05:07 +0000 
(0:00:01.448) 0:00:49.300 *********** 2025-06-01 23:06:17.990625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 23:06:17 | INFO  | Task 93223891-b177-40b7-bdcc-9b0ddc34a9d0 is in state SUCCESS 2025-06-01 23:06:17.990644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode':
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 23:06:17.991031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 23:06:17.991044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.991055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.991076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.991097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.991109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 
'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.991125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.991137 | orchestrator | 2025-06-01 23:06:17.991148 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-06-01 23:06:17.991159 | orchestrator | Sunday 01 June 2025 23:05:10 +0000 (0:00:03.159) 0:00:52.460 *********** 2025-06-01 23:06:17.991170 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:06:17.991182 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:06:17.991193 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:06:17.991204 | orchestrator | 2025-06-01 23:06:17.991215 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-06-01 23:06:17.991226 | orchestrator | Sunday 01 June 2025 23:05:12 +0000 (0:00:02.451) 0:00:54.911 *********** 2025-06-01 23:06:17.991237 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-01 23:06:17.991248 
| orchestrator | 2025-06-01 23:06:17.991259 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-06-01 23:06:17.991270 | orchestrator | Sunday 01 June 2025 23:05:15 +0000 (0:00:02.345) 0:00:57.256 *********** 2025-06-01 23:06:17.991281 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:06:17.991299 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:06:17.991310 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:06:17.991321 | orchestrator | 2025-06-01 23:06:17.991332 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-06-01 23:06:17.991343 | orchestrator | Sunday 01 June 2025 23:05:15 +0000 (0:00:00.454) 0:00:57.711 *********** 2025-06-01 23:06:17.991355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 23:06:17.991373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 23:06:17.991385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 23:06:17.991402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.991414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.991432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.991444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.991461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.991473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.991484 | orchestrator | 2025-06-01 23:06:17.991495 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-06-01 23:06:17.991506 | orchestrator | Sunday 01 June 2025 
23:05:25 +0000 (0:00:09.637) 0:01:07.349 *********** 2025-06-01 23:06:17.991523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 23:06:17.991543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 23:06:17.991555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:06:17.991567 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:06:17.991584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 23:06:17.991596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 23:06:17.991608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:06:17.991626 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:06:17.991647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 23:06:17.991661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 23:06:17.991674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:06:17.991688 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:06:17.991755 | orchestrator | 2025-06-01 23:06:17.991769 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-06-01 23:06:17.991783 | orchestrator | Sunday 01 June 2025 23:05:26 +0000 (0:00:00.725) 0:01:08.075 *********** 2025-06-01 23:06:17.991805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 23:06:17.991825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 23:06:17.991847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 23:06:17.991860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.991873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.991894 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.991907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.991935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.991949 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:06:17.991962 | orchestrator | 2025-06-01 23:06:17.991974 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-01 23:06:17.991985 | orchestrator | Sunday 01 June 2025 23:05:28 +0000 (0:00:02.489) 0:01:10.564 *********** 2025-06-01 23:06:17.991996 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:06:17.992007 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:06:17.992018 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:06:17.992029 | orchestrator | 2025-06-01 23:06:17.992040 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-06-01 23:06:17.992051 | orchestrator | Sunday 01 June 2025 23:05:28 +0000 (0:00:00.336) 0:01:10.901 *********** 2025-06-01 23:06:17.992062 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:06:17.992073 | orchestrator | 2025-06-01 23:06:17.992084 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-06-01 23:06:17.992095 | orchestrator | Sunday 01 June 2025 23:05:31 +0000 (0:00:02.128) 0:01:13.029 *********** 2025-06-01 23:06:17.992106 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:06:17.992116 | orchestrator | 2025-06-01 23:06:17.992127 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-06-01 23:06:17.992138 | orchestrator | Sunday 01 June 2025 23:05:33 
+0000 (0:00:02.292) 0:01:15.322 *********** 2025-06-01 23:06:17.992149 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:06:17.992160 | orchestrator | 2025-06-01 23:06:17.992171 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-01 23:06:17.992181 | orchestrator | Sunday 01 June 2025 23:05:45 +0000 (0:00:11.810) 0:01:27.133 *********** 2025-06-01 23:06:17.992192 | orchestrator | 2025-06-01 23:06:17.992203 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-01 23:06:17.992214 | orchestrator | Sunday 01 June 2025 23:05:45 +0000 (0:00:00.066) 0:01:27.199 *********** 2025-06-01 23:06:17.992225 | orchestrator | 2025-06-01 23:06:17.992236 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-01 23:06:17.992247 | orchestrator | Sunday 01 June 2025 23:05:45 +0000 (0:00:00.067) 0:01:27.266 *********** 2025-06-01 23:06:17.992258 | orchestrator | 2025-06-01 23:06:17.992268 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-06-01 23:06:17.992279 | orchestrator | Sunday 01 June 2025 23:05:45 +0000 (0:00:00.091) 0:01:27.358 *********** 2025-06-01 23:06:17.992290 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:06:17.992301 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:06:17.992312 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:06:17.992323 | orchestrator | 2025-06-01 23:06:17.992334 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-06-01 23:06:17.992345 | orchestrator | Sunday 01 June 2025 23:05:58 +0000 (0:00:12.762) 0:01:40.121 *********** 2025-06-01 23:06:17.992364 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:06:17.992375 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:06:17.992392 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:06:17.992403 | 
orchestrator | 2025-06-01 23:06:17.992414 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-06-01 23:06:17.992426 | orchestrator | Sunday 01 June 2025 23:06:08 +0000 (0:00:09.876) 0:01:49.998 *********** 2025-06-01 23:06:17.992437 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:06:17.992448 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:06:17.992458 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:06:17.992469 | orchestrator | 2025-06-01 23:06:17.992480 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:06:17.992493 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-01 23:06:17.992505 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-01 23:06:17.992516 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-01 23:06:17.992527 | orchestrator | 2025-06-01 23:06:17.992538 | orchestrator | 2025-06-01 23:06:17.992549 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 23:06:17.992560 | orchestrator | Sunday 01 June 2025 23:06:15 +0000 (0:00:07.235) 0:01:57.234 *********** 2025-06-01 23:06:17.992571 | orchestrator | =============================================================================== 2025-06-01 23:06:17.992581 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.81s 2025-06-01 23:06:17.992593 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.76s 2025-06-01 23:06:17.992608 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.81s 2025-06-01 23:06:17.992620 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.88s 2025-06-01 
23:06:17.992631 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.64s 2025-06-01 23:06:17.992641 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 7.24s 2025-06-01 23:06:17.992652 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.12s 2025-06-01 23:06:17.992663 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.12s 2025-06-01 23:06:17.992674 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.92s 2025-06-01 23:06:17.992685 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.76s 2025-06-01 23:06:17.992696 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.16s 2025-06-01 23:06:17.992723 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.15s 2025-06-01 23:06:17.992733 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.05s 2025-06-01 23:06:17.992744 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.49s 2025-06-01 23:06:17.992755 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.45s 2025-06-01 23:06:17.992766 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.35s 2025-06-01 23:06:17.992777 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.29s 2025-06-01 23:06:17.992787 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.13s 2025-06-01 23:06:17.992798 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.93s 2025-06-01 23:06:17.992809 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.81s 2025-06-01 23:06:17.992820 
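Aside: the barbican task output above repeats the same kolla-style healthcheck dict for every container (`interval`, `retries`, `start_period`, `test: ['CMD-SHELL', …]`, `timeout`). A minimal sketch, assuming nothing about OSISM's actual implementation, of how such a dict could be mapped onto Docker's `--health-*` run flags:

```python
# Sketch only: translate a kolla-style healthcheck dict (field names taken
# from the log output above) into docker run health flags. This is an
# illustration, not kolla-ansible's real code path.

def healthcheck_to_docker_args(hc: dict) -> list[str]:
    """Build docker run --health-* flags from a kolla healthcheck dict."""
    args = [
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]
    # kolla logs the test as ['CMD-SHELL', '<command>']; docker's
    # --health-cmd takes the shell command string directly.
    if hc["test"][0] == "CMD-SHELL":
        args += ["--health-cmd", hc["test"][1]]
    return args

# Values copied verbatim from the barbican-api entry for testbed-node-0:
hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
    "timeout": "30",
}
flags = healthcheck_to_docker_args(hc)
```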
| orchestrator | 2025-06-01 23:06:17 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:06:17.992839 | orchestrator | 2025-06-01 23:06:17 | INFO  | Task 55290e10-0c9a-4745-b9d8-37144f40f317 is in state STARTED 2025-06-01 23:06:17.992850 | orchestrator | 2025-06-01 23:06:17 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:06:21.016191 | orchestrator | 2025-06-01 23:06:21 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:06:21.016320 | orchestrator | 2025-06-01 23:06:21 | INFO  | Task e2d156ee-7c68-487b-9423-f704c1f2ec53 is in state STARTED 2025-06-01 23:06:21.016757 | orchestrator | 2025-06-01 23:06:21 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:06:21.017126 | orchestrator | 2025-06-01 23:06:21 | INFO  | Task 55290e10-0c9a-4745-b9d8-37144f40f317 is in state STARTED 2025-06-01 23:06:21.017285 | orchestrator | 2025-06-01 23:06:21 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:06:24.044118 | orchestrator | 2025-06-01 23:06:24 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:06:24.044248 | orchestrator | 2025-06-01 23:06:24 | INFO  | Task e2d156ee-7c68-487b-9423-f704c1f2ec53 is in state STARTED 2025-06-01 23:06:24.045070 | orchestrator | 2025-06-01 23:06:24 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:06:24.045772 | orchestrator | 2025-06-01 23:06:24 | INFO  | Task 55290e10-0c9a-4745-b9d8-37144f40f317 is in state STARTED 2025-06-01 23:06:24.045827 | orchestrator | 2025-06-01 23:06:24 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:06:27.067598 | orchestrator | 2025-06-01 23:06:27 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:06:27.067806 | orchestrator | 2025-06-01 23:06:27 | INFO  | Task e2d156ee-7c68-487b-9423-f704c1f2ec53 is in state STARTED 2025-06-01 23:06:27.068347 | orchestrator | 2025-06-01 
23:06:27 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:06:27.068831 | orchestrator | 2025-06-01 23:06:27 | INFO  | Task 55290e10-0c9a-4745-b9d8-37144f40f317 is in state STARTED 2025-06-01 23:06:27.069451 | orchestrator | 2025-06-01 23:06:27 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:06:30.096856 | orchestrator | 2025-06-01 23:06:30 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:06:30.099988 | orchestrator | 2025-06-01 23:06:30 | INFO  | Task e2d156ee-7c68-487b-9423-f704c1f2ec53 is in state STARTED 2025-06-01 23:06:30.100806 | orchestrator | 2025-06-01 23:06:30 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:06:30.101928 | orchestrator | 2025-06-01 23:06:30 | INFO  | Task 55290e10-0c9a-4745-b9d8-37144f40f317 is in state STARTED 2025-06-01 23:06:30.101948 | orchestrator | 2025-06-01 23:06:30 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:06:33.126763 | orchestrator | 2025-06-01 23:06:33 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:06:33.128481 | orchestrator | 2025-06-01 23:06:33 | INFO  | Task e2d156ee-7c68-487b-9423-f704c1f2ec53 is in state STARTED 2025-06-01 23:06:33.129254 | orchestrator | 2025-06-01 23:06:33 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:06:33.130214 | orchestrator | 2025-06-01 23:06:33 | INFO  | Task 55290e10-0c9a-4745-b9d8-37144f40f317 is in state STARTED 2025-06-01 23:06:33.130325 | orchestrator | 2025-06-01 23:06:33 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:06:36.156622 | orchestrator | 2025-06-01 23:06:36 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:06:36.156937 | orchestrator | 2025-06-01 23:06:36 | INFO  | Task e2d156ee-7c68-487b-9423-f704c1f2ec53 is in state STARTED 2025-06-01 23:06:36.157737 | orchestrator | 2025-06-01 23:06:36 | INFO  | Task 
581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:06:36.158440 | orchestrator | 2025-06-01 23:06:36 | INFO  | Task 55290e10-0c9a-4745-b9d8-37144f40f317 is in state STARTED 2025-06-01 23:06:36.159470 | orchestrator | 2025-06-01 23:06:36 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 23:06:36 to 23:08:44: tasks f0436069-7cdb-480c-932d-68dcd5789fb7, e2d156ee-7c68-487b-9423-f704c1f2ec53, 581ff2af-ba2c-4b38-801f-b53638449c80 and 55290e10-0c9a-4745-b9d8-37144f40f317 all remained in state STARTED ...]
2025-06-01 23:08:44.383108 | orchestrator | 2025-06-01 23:08:44 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:08:44.387829 | orchestrator | 2025-06-01 23:08:44 | INFO  | Task e2d156ee-7c68-487b-9423-f704c1f2ec53 is in state STARTED 2025-06-01 23:08:44.389755 | orchestrator | 2025-06-01 23:08:44 | INFO  | Task 
581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:08:44.394800 | orchestrator | 2025-06-01 23:08:44 | INFO  | Task 55290e10-0c9a-4745-b9d8-37144f40f317 is in state SUCCESS 2025-06-01 23:08:44.396992 | orchestrator | 2025-06-01 23:08:44.397206 | orchestrator | 2025-06-01 23:08:44.397245 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 23:08:44.397269 | orchestrator | 2025-06-01 23:08:44.397291 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 23:08:44.397312 | orchestrator | Sunday 01 June 2025 23:04:05 +0000 (0:00:00.247) 0:00:00.247 *********** 2025-06-01 23:08:44.397327 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:08:44.397339 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:08:44.397350 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:08:44.397362 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:08:44.397372 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:08:44.397383 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:08:44.397394 | orchestrator | 2025-06-01 23:08:44.397409 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 23:08:44.397428 | orchestrator | Sunday 01 June 2025 23:04:06 +0000 (0:00:00.568) 0:00:00.816 *********** 2025-06-01 23:08:44.397446 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-06-01 23:08:44.397466 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-06-01 23:08:44.397484 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-06-01 23:08:44.397502 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-06-01 23:08:44.397520 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-06-01 23:08:44.397774 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-06-01 23:08:44.397831 | orchestrator | 2025-06-01 
23:08:44.397848 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-06-01 23:08:44.397862 | orchestrator | 2025-06-01 23:08:44.397875 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-01 23:08:44.397888 | orchestrator | Sunday 01 June 2025 23:04:06 +0000 (0:00:00.550) 0:00:01.366 *********** 2025-06-01 23:08:44.397904 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 23:08:44.397948 | orchestrator | 2025-06-01 23:08:44.397960 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-06-01 23:08:44.397971 | orchestrator | Sunday 01 June 2025 23:04:07 +0000 (0:00:01.026) 0:00:02.393 *********** 2025-06-01 23:08:44.397982 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:08:44.397993 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:08:44.398004 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:08:44.398061 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:08:44.398077 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:08:44.398088 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:08:44.398099 | orchestrator | 2025-06-01 23:08:44.398110 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-06-01 23:08:44.398121 | orchestrator | Sunday 01 June 2025 23:04:09 +0000 (0:00:01.423) 0:00:03.817 *********** 2025-06-01 23:08:44.398132 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:08:44.398143 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:08:44.398153 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:08:44.398164 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:08:44.398175 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:08:44.398186 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:08:44.398196 | orchestrator | 2025-06-01 
23:08:44.398207 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-06-01 23:08:44.398219 | orchestrator | Sunday 01 June 2025 23:04:10 +0000 (0:00:01.406) 0:00:05.223 *********** 2025-06-01 23:08:44.398229 | orchestrator | ok: [testbed-node-0] => { 2025-06-01 23:08:44.398241 | orchestrator |  "changed": false, 2025-06-01 23:08:44.398253 | orchestrator |  "msg": "All assertions passed" 2025-06-01 23:08:44.398264 | orchestrator | } 2025-06-01 23:08:44.398275 | orchestrator | ok: [testbed-node-1] => { 2025-06-01 23:08:44.398286 | orchestrator |  "changed": false, 2025-06-01 23:08:44.398297 | orchestrator |  "msg": "All assertions passed" 2025-06-01 23:08:44.398307 | orchestrator | } 2025-06-01 23:08:44.398318 | orchestrator | ok: [testbed-node-2] => { 2025-06-01 23:08:44.398329 | orchestrator |  "changed": false, 2025-06-01 23:08:44.398340 | orchestrator |  "msg": "All assertions passed" 2025-06-01 23:08:44.398351 | orchestrator | } 2025-06-01 23:08:44.398362 | orchestrator | ok: [testbed-node-3] => { 2025-06-01 23:08:44.398373 | orchestrator |  "changed": false, 2025-06-01 23:08:44.398384 | orchestrator |  "msg": "All assertions passed" 2025-06-01 23:08:44.398394 | orchestrator | } 2025-06-01 23:08:44.398405 | orchestrator | ok: [testbed-node-4] => { 2025-06-01 23:08:44.398416 | orchestrator |  "changed": false, 2025-06-01 23:08:44.398427 | orchestrator |  "msg": "All assertions passed" 2025-06-01 23:08:44.398437 | orchestrator | } 2025-06-01 23:08:44.398448 | orchestrator | ok: [testbed-node-5] => { 2025-06-01 23:08:44.398459 | orchestrator |  "changed": false, 2025-06-01 23:08:44.398469 | orchestrator |  "msg": "All assertions passed" 2025-06-01 23:08:44.398480 | orchestrator | } 2025-06-01 23:08:44.398491 | orchestrator | 2025-06-01 23:08:44.398502 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-06-01 23:08:44.398512 | orchestrator | Sunday 01 June 2025 
23:04:11 +0000 (0:00:00.862) 0:00:06.085 *********** 2025-06-01 23:08:44.398523 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.398534 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.398545 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.398556 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.398566 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.398577 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.398588 | orchestrator | 2025-06-01 23:08:44.398613 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-06-01 23:08:44.398624 | orchestrator | Sunday 01 June 2025 23:04:12 +0000 (0:00:00.598) 0:00:06.684 *********** 2025-06-01 23:08:44.398635 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-06-01 23:08:44.398646 | orchestrator | 2025-06-01 23:08:44.398666 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-06-01 23:08:44.398677 | orchestrator | Sunday 01 June 2025 23:04:15 +0000 (0:00:03.223) 0:00:09.907 *********** 2025-06-01 23:08:44.398688 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-06-01 23:08:44.398701 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-06-01 23:08:44.398712 | orchestrator | 2025-06-01 23:08:44.398768 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-06-01 23:08:44.398780 | orchestrator | Sunday 01 June 2025 23:04:21 +0000 (0:00:06.116) 0:00:16.023 *********** 2025-06-01 23:08:44.398791 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-01 23:08:44.398802 | orchestrator | 2025-06-01 23:08:44.398813 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-06-01 23:08:44.398824 | 
orchestrator | Sunday 01 June 2025 23:04:24 +0000 (0:00:03.108) 0:00:19.132 *********** 2025-06-01 23:08:44.398834 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-01 23:08:44.398845 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-06-01 23:08:44.398856 | orchestrator | 2025-06-01 23:08:44.398867 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-06-01 23:08:44.398878 | orchestrator | Sunday 01 June 2025 23:04:28 +0000 (0:00:03.709) 0:00:22.842 *********** 2025-06-01 23:08:44.398888 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-01 23:08:44.398899 | orchestrator | 2025-06-01 23:08:44.398910 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-06-01 23:08:44.398921 | orchestrator | Sunday 01 June 2025 23:04:31 +0000 (0:00:03.230) 0:00:26.073 *********** 2025-06-01 23:08:44.398931 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-06-01 23:08:44.398942 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-06-01 23:08:44.398953 | orchestrator | 2025-06-01 23:08:44.398963 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-01 23:08:44.398974 | orchestrator | Sunday 01 June 2025 23:04:38 +0000 (0:00:07.225) 0:00:33.298 *********** 2025-06-01 23:08:44.398985 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.398996 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.399007 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.399017 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.399028 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.399039 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.399049 | orchestrator | 2025-06-01 23:08:44.399060 | orchestrator | TASK [Load and persist kernel modules] 
***************************************** 2025-06-01 23:08:44.399071 | orchestrator | Sunday 01 June 2025 23:04:39 +0000 (0:00:00.788) 0:00:34.087 *********** 2025-06-01 23:08:44.399082 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.399092 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.399103 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.399114 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.399124 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.399135 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.399145 | orchestrator | 2025-06-01 23:08:44.399156 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-06-01 23:08:44.399167 | orchestrator | Sunday 01 June 2025 23:04:41 +0000 (0:00:02.247) 0:00:36.334 *********** 2025-06-01 23:08:44.399178 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:08:44.399189 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:08:44.399200 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:08:44.399219 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:08:44.399239 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:08:44.399290 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:08:44.399306 | orchestrator | 2025-06-01 23:08:44.399318 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-06-01 23:08:44.399339 | orchestrator | Sunday 01 June 2025 23:04:42 +0000 (0:00:01.080) 0:00:37.414 *********** 2025-06-01 23:08:44.399350 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.399361 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.399372 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.399382 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.399393 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.399404 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.399414 
| orchestrator | 2025-06-01 23:08:44.399425 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-06-01 23:08:44.399436 | orchestrator | Sunday 01 June 2025 23:04:45 +0000 (0:00:02.805) 0:00:40.220 *********** 2025-06-01 23:08:44.399457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:08:44.399487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:08:44.399500 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 23:08:44.399513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:08:44.399531 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 23:08:44.399547 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 23:08:44.399559 | orchestrator | 2025-06-01 23:08:44.399570 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-06-01 23:08:44.399582 | orchestrator | Sunday 01 June 2025 23:04:49 +0000 (0:00:03.599) 0:00:43.819 *********** 2025-06-01 23:08:44.399593 | orchestrator | [WARNING]: Skipped 2025-06-01 23:08:44.399605 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-06-01 23:08:44.399616 | 
orchestrator | due to this access issue: 2025-06-01 23:08:44.399627 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-06-01 23:08:44.399638 | orchestrator | a directory 2025-06-01 23:08:44.399649 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-01 23:08:44.399660 | orchestrator | 2025-06-01 23:08:44.399676 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-01 23:08:44.399688 | orchestrator | Sunday 01 June 2025 23:04:50 +0000 (0:00:00.871) 0:00:44.691 *********** 2025-06-01 23:08:44.399699 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 23:08:44.399712 | orchestrator | 2025-06-01 23:08:44.399742 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-06-01 23:08:44.399754 | orchestrator | Sunday 01 June 2025 23:04:51 +0000 (0:00:01.350) 0:00:46.041 *********** 2025-06-01 23:08:44.399765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2025-06-01 23:08:44.399785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:08:44.399796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:08:44.399813 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 23:08:44.399833 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 23:08:44.399845 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 23:08:44.399862 | orchestrator | 2025-06-01 23:08:44.399873 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-06-01 23:08:44.399884 | orchestrator | Sunday 01 June 2025 23:04:56 +0000 (0:00:04.917) 0:00:50.959 *********** 2025-06-01 23:08:44.399895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 23:08:44.399907 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.399919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.399930 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.399952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 23:08:44.399965 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.399976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 23:08:44.399993 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.400004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.400015 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.400027 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.400038 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.400049 | orchestrator | 2025-06-01 23:08:44.400059 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-06-01 23:08:44.400070 | orchestrator | Sunday 01 June 2025 23:05:01 +0000 (0:00:05.026) 0:00:55.986 *********** 2025-06-01 23:08:44.400086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 23:08:44.400098 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.400116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 23:08:44.400134 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.400146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 23:08:44.400157 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.400168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.400179 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.400190 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.400201 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.400222 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.400234 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.400245 | orchestrator | 2025-06-01 23:08:44.400256 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-06-01 23:08:44.400272 | orchestrator | Sunday 01 June 2025 23:05:05 +0000 (0:00:03.678) 0:00:59.665 *********** 2025-06-01 23:08:44.400283 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.400301 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.400311 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.400322 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.400333 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.400343 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.400354 | orchestrator | 2025-06-01 23:08:44.400365 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-06-01 23:08:44.400375 | orchestrator | Sunday 01 June 2025 23:05:08 +0000 (0:00:03.138) 0:01:02.804 *********** 2025-06-01 23:08:44.400386 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.400397 | orchestrator | 2025-06-01 23:08:44.400408 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-06-01 23:08:44.400418 | orchestrator | Sunday 01 June 2025 23:05:08 +0000 (0:00:00.131) 0:01:02.936 *********** 2025-06-01 23:08:44.400429 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.400440 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.400451 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.400462 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.400472 | orchestrator | skipping: [testbed-node-4] 2025-06-01 
23:08:44.400483 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.400494 | orchestrator | 2025-06-01 23:08:44.400504 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-06-01 23:08:44.400515 | orchestrator | Sunday 01 June 2025 23:05:09 +0000 (0:00:00.868) 0:01:03.805 *********** 2025-06-01 23:08:44.400526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 23:08:44.400537 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.400549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 23:08:44.400560 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.400576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 23:08:44.400594 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.400612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.400624 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.400634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.400646 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.400656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.400668 | orchestrator | skipping: [testbed-node-5] 
2025-06-01 23:08:44.400678 | orchestrator | 2025-06-01 23:08:44.400689 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-06-01 23:08:44.400700 | orchestrator | Sunday 01 June 2025 23:05:12 +0000 (0:00:03.187) 0:01:06.992 *********** 2025-06-01 23:08:44.400711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:08:44.400794 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}}) 2025-06-01 23:08:44.400816 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 23:08:44.400827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:08:44.400837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:08:44.400848 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 23:08:44.400863 | orchestrator | 2025-06-01 23:08:44.400873 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-06-01 23:08:44.400882 | orchestrator | Sunday 01 June 2025 23:05:17 +0000 (0:00:05.540) 0:01:12.533 *********** 2025-06-01 23:08:44.400904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:08:44.400915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:08:44.400925 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 23:08:44.400936 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 23:08:44.400950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:08:44.400972 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 23:08:44.400982 | orchestrator | 2025-06-01 23:08:44.400992 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-06-01 23:08:44.401002 | orchestrator | Sunday 01 June 2025 23:05:26 +0000 (0:00:09.042) 0:01:21.575 *********** 2025-06-01 23:08:44.401012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.401021 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.401031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:08:44.401042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:08:44.401061 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.401072 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.401090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:08:44.401100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.401110 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.401120 | orchestrator | 2025-06-01 23:08:44.401130 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-06-01 23:08:44.401139 | orchestrator | Sunday 01 June 2025 23:05:29 +0000 (0:00:02.873) 0:01:24.449 *********** 2025-06-01 23:08:44.401149 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.401159 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.401168 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:08:44.401178 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:08:44.401187 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:08:44.401197 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.401206 | orchestrator | 2025-06-01 23:08:44.401216 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-06-01 23:08:44.401225 | orchestrator | Sunday 01 June 2025 23:05:32 +0000 (0:00:03.128) 0:01:27.578 *********** 2025-06-01 23:08:44.401235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.401251 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.401265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.401275 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.401292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.401303 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.401312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:08:44.401323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:08:44.401339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:08:44.401349 | orchestrator | 2025-06-01 23:08:44.401359 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-06-01 23:08:44.401368 | orchestrator | Sunday 01 June 2025 23:05:37 +0000 (0:00:04.828) 0:01:32.407 *********** 2025-06-01 23:08:44.401378 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.401387 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.401397 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.401406 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.401415 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.401425 | orchestrator | skipping: [testbed-node-5] 2025-06-01 
23:08:44.401434 | orchestrator |
2025-06-01 23:08:44.401444 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-06-01 23:08:44.401461 | orchestrator | Sunday 01 June 2025 23:05:40 +0000 (0:00:02.616) 0:01:35.023 ***********
2025-06-01 23:08:44.401471 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:08:44.401481 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:08:44.401490 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:08:44.401499 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:08:44.401509 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:08:44.401518 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:08:44.401528 | orchestrator |
2025-06-01 23:08:44.401537 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-06-01 23:08:44.401547 | orchestrator | Sunday 01 June 2025 23:05:42 +0000 (0:00:02.512) 0:01:37.536 ***********
2025-06-01 23:08:44.401557 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:08:44.401566 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:08:44.401576 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:08:44.401591 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:08:44.401600 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:08:44.401610 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:08:44.401619 | orchestrator |
2025-06-01 23:08:44.401629 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-06-01 23:08:44.401639 | orchestrator | Sunday 01 June 2025 23:05:45 +0000 (0:00:02.400) 0:01:39.937 ***********
2025-06-01 23:08:44.401648 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:08:44.401657 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:08:44.401667 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:08:44.401676 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:08:44.401686 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:08:44.401695 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:08:44.401704 | orchestrator |
2025-06-01 23:08:44.401714 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-06-01 23:08:44.401741 | orchestrator | Sunday 01 June 2025 23:05:49 +0000 (0:00:03.963) 0:01:43.900 ***********
2025-06-01 23:08:44.401757 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:08:44.401767 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:08:44.401776 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:08:44.401786 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:08:44.401795 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:08:44.401805 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:08:44.401814 | orchestrator |
2025-06-01 23:08:44.401824 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-06-01 23:08:44.401833 | orchestrator | Sunday 01 June 2025 23:05:51 +0000 (0:00:02.649) 0:01:46.550 ***********
2025-06-01 23:08:44.401843 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:08:44.401852 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:08:44.401862 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:08:44.401871 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:08:44.401880 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:08:44.401890 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:08:44.401899 | orchestrator |
2025-06-01 23:08:44.401909 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-06-01 23:08:44.401918 | orchestrator | Sunday 01 June 2025 23:05:54 +0000 (0:00:02.500) 0:01:49.050 ***********
2025-06-01 23:08:44.401928 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-01 23:08:44.401937
| orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.401947 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-01 23:08:44.401957 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.401966 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-01 23:08:44.401976 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.401986 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-01 23:08:44.401995 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.402005 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-01 23:08:44.402199 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.402231 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-01 23:08:44.402249 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.402260 | orchestrator | 2025-06-01 23:08:44.402270 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-06-01 23:08:44.402279 | orchestrator | Sunday 01 June 2025 23:05:57 +0000 (0:00:02.863) 0:01:51.913 *********** 2025-06-01 23:08:44.402290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 23:08:44.402301 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.402325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 23:08:44.402345 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.402355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.402366 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.402375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.402385 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.402395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 23:08:44.402405 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.402420 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.402437 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.402446 | orchestrator | 2025-06-01 23:08:44.402456 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-06-01 23:08:44.402466 | orchestrator | Sunday 01 June 2025 23:06:01 +0000 (0:00:04.677) 0:01:56.591 *********** 2025-06-01 23:08:44.402484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 23:08:44.402494 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.402504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.402514 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.402523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 23:08:44.402533 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.402543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 23:08:44.402560 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.402575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': 
'30'}}})  2025-06-01 23:08:44.402585 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.402601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.402611 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.402621 | orchestrator | 2025-06-01 23:08:44.402631 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-06-01 23:08:44.402641 | orchestrator | Sunday 01 June 2025 23:06:03 +0000 (0:00:02.026) 0:01:58.617 *********** 2025-06-01 23:08:44.402650 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.402660 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.402670 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.402679 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.402689 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.402698 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.402708 | orchestrator | 2025-06-01 23:08:44.402717 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-06-01 23:08:44.402746 | orchestrator | Sunday 01 June 2025 23:06:07 +0000 (0:00:03.294) 0:02:01.911 *********** 2025-06-01 23:08:44.402756 | orchestrator | skipping: 
[testbed-node-0] 2025-06-01 23:08:44.402765 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.402775 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.402784 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:08:44.402794 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:08:44.402803 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:08:44.402813 | orchestrator | 2025-06-01 23:08:44.402837 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-06-01 23:08:44.402849 | orchestrator | Sunday 01 June 2025 23:06:13 +0000 (0:00:06.005) 0:02:07.917 *********** 2025-06-01 23:08:44.402870 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.402881 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.402892 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.402903 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.402914 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.402925 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.402936 | orchestrator | 2025-06-01 23:08:44.402947 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-06-01 23:08:44.402958 | orchestrator | Sunday 01 June 2025 23:06:18 +0000 (0:00:04.809) 0:02:12.727 *********** 2025-06-01 23:08:44.402977 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.402988 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.403000 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.403010 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.403021 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.403032 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.403044 | orchestrator | 2025-06-01 23:08:44.403055 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-06-01 23:08:44.403065 | 
orchestrator | Sunday 01 June 2025 23:06:21 +0000 (0:00:03.243) 0:02:15.970 *********** 2025-06-01 23:08:44.403074 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.403084 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.403093 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.403103 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.403112 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.403122 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.403131 | orchestrator | 2025-06-01 23:08:44.403141 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-06-01 23:08:44.403150 | orchestrator | Sunday 01 June 2025 23:06:23 +0000 (0:00:02.358) 0:02:18.328 *********** 2025-06-01 23:08:44.403160 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.403169 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.403179 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.403188 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.403198 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.403207 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.403217 | orchestrator | 2025-06-01 23:08:44.403226 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-06-01 23:08:44.403236 | orchestrator | Sunday 01 June 2025 23:06:27 +0000 (0:00:03.802) 0:02:22.131 *********** 2025-06-01 23:08:44.403245 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.403255 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.403264 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.403274 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.403283 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.403297 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.403307 | orchestrator | 2025-06-01 
23:08:44.403316 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-06-01 23:08:44.403326 | orchestrator | Sunday 01 June 2025 23:06:31 +0000 (0:00:04.081) 0:02:26.213 *********** 2025-06-01 23:08:44.403335 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.403345 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.403354 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.403364 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.403373 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.403382 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.403392 | orchestrator | 2025-06-01 23:08:44.403401 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-06-01 23:08:44.403411 | orchestrator | Sunday 01 June 2025 23:06:35 +0000 (0:00:03.981) 0:02:30.194 *********** 2025-06-01 23:08:44.403421 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.403435 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.403445 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.403455 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.403464 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.403474 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.403483 | orchestrator | 2025-06-01 23:08:44.403493 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-06-01 23:08:44.403502 | orchestrator | Sunday 01 June 2025 23:06:39 +0000 (0:00:03.675) 0:02:33.869 *********** 2025-06-01 23:08:44.403512 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.403521 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.403531 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.403547 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.403557 | orchestrator | skipping: 
[testbed-node-4] 2025-06-01 23:08:44.403566 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.403575 | orchestrator | 2025-06-01 23:08:44.403585 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-06-01 23:08:44.403595 | orchestrator | Sunday 01 June 2025 23:06:42 +0000 (0:00:03.358) 0:02:37.228 *********** 2025-06-01 23:08:44.403604 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-01 23:08:44.403614 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.403623 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-01 23:08:44.403633 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.403643 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-01 23:08:44.403652 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.403662 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-01 23:08:44.403672 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.403681 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-01 23:08:44.403691 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.403700 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-01 23:08:44.403710 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.403720 | orchestrator | 2025-06-01 23:08:44.403744 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-06-01 23:08:44.403754 | orchestrator | Sunday 01 June 2025 23:06:45 +0000 (0:00:03.210) 0:02:40.438 *********** 2025-06-01 23:08:44.403765 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 23:08:44.403775 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.403790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 23:08:44.403800 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.403816 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 23:08:44.403837 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.403848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.403857 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.403867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.403877 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.403887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:08:44.403897 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.403906 | orchestrator | 2025-06-01 23:08:44.403916 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-06-01 23:08:44.403926 | orchestrator | Sunday 01 June 2025 23:06:50 +0000 (0:00:04.715) 0:02:45.154 *********** 2025-06-01 23:08:44.403940 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 23:08:44.403962 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 23:08:44.403973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:08:44.403984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:08:44.403994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:08:44.404012 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-01 23:08:44.404030 | orchestrator | 2025-06-01 23:08:44.404041 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-01 23:08:44.404055 | orchestrator | Sunday 01 June 2025 23:06:55 +0000 (0:00:04.899) 0:02:50.053 *********** 2025-06-01 23:08:44.404065 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:44.404074 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:44.404084 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:44.404093 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:08:44.404103 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:08:44.404112 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:08:44.404122 | orchestrator | 2025-06-01 23:08:44.404132 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-06-01 
23:08:44.404141 | orchestrator | Sunday 01 June 2025 23:06:56 +0000 (0:00:00.957) 0:02:51.011 *********** 2025-06-01 23:08:44.404151 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:08:44.404160 | orchestrator | 2025-06-01 23:08:44.404170 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-06-01 23:08:44.404179 | orchestrator | Sunday 01 June 2025 23:06:58 +0000 (0:00:02.001) 0:02:53.013 *********** 2025-06-01 23:08:44.404189 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:08:44.404198 | orchestrator | 2025-06-01 23:08:44.404208 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-06-01 23:08:44.404217 | orchestrator | Sunday 01 June 2025 23:07:00 +0000 (0:00:02.190) 0:02:55.203 *********** 2025-06-01 23:08:44.404227 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:08:44.404236 | orchestrator | 2025-06-01 23:08:44.404246 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-01 23:08:44.404255 | orchestrator | Sunday 01 June 2025 23:07:44 +0000 (0:00:44.288) 0:03:39.491 *********** 2025-06-01 23:08:44.404265 | orchestrator | 2025-06-01 23:08:44.404274 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-01 23:08:44.404284 | orchestrator | Sunday 01 June 2025 23:07:44 +0000 (0:00:00.087) 0:03:39.579 *********** 2025-06-01 23:08:44.404293 | orchestrator | 2025-06-01 23:08:44.404303 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-01 23:08:44.404312 | orchestrator | Sunday 01 June 2025 23:07:45 +0000 (0:00:00.346) 0:03:39.925 *********** 2025-06-01 23:08:44.404322 | orchestrator | 2025-06-01 23:08:44.404331 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-01 23:08:44.404341 | orchestrator | Sunday 01 June 2025 23:07:45 +0000 
(0:00:00.069) 0:03:39.994 *********** 2025-06-01 23:08:44.404350 | orchestrator | 2025-06-01 23:08:44.404360 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-01 23:08:44.404369 | orchestrator | Sunday 01 June 2025 23:07:45 +0000 (0:00:00.076) 0:03:40.071 *********** 2025-06-01 23:08:44.404379 | orchestrator | 2025-06-01 23:08:44.404388 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-01 23:08:44.404398 | orchestrator | Sunday 01 June 2025 23:07:45 +0000 (0:00:00.075) 0:03:40.147 *********** 2025-06-01 23:08:44.404407 | orchestrator | 2025-06-01 23:08:44.404416 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-06-01 23:08:44.404426 | orchestrator | Sunday 01 June 2025 23:07:45 +0000 (0:00:00.065) 0:03:40.212 *********** 2025-06-01 23:08:44.404436 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:08:44.404451 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:08:44.404461 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:08:44.404471 | orchestrator | 2025-06-01 23:08:44.404481 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-06-01 23:08:44.404490 | orchestrator | Sunday 01 June 2025 23:08:11 +0000 (0:00:26.001) 0:04:06.214 *********** 2025-06-01 23:08:44.404500 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:08:44.404509 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:08:44.404519 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:08:44.404529 | orchestrator | 2025-06-01 23:08:44.404538 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:08:44.404548 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-01 23:08:44.404559 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 
failed=0 skipped=31  rescued=0 ignored=0 2025-06-01 23:08:44.404569 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-06-01 23:08:44.404579 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-01 23:08:44.404589 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-01 23:08:44.404598 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-01 23:08:44.404608 | orchestrator | 2025-06-01 23:08:44.404617 | orchestrator | 2025-06-01 23:08:44.404631 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 23:08:44.404641 | orchestrator | Sunday 01 June 2025 23:08:42 +0000 (0:00:30.670) 0:04:36.884 *********** 2025-06-01 23:08:44.404651 | orchestrator | =============================================================================== 2025-06-01 23:08:44.404660 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 44.29s 2025-06-01 23:08:44.404670 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 30.67s 2025-06-01 23:08:44.404679 | orchestrator | neutron : Restart neutron-server container ----------------------------- 26.00s 2025-06-01 23:08:44.404689 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 9.04s 2025-06-01 23:08:44.404704 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.23s 2025-06-01 23:08:44.404714 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.12s 2025-06-01 23:08:44.404751 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 6.01s 2025-06-01 23:08:44.404762 | orchestrator | neutron : Copying over config.json files for services 
------------------- 5.54s 2025-06-01 23:08:44.404771 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 5.03s 2025-06-01 23:08:44.404781 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.92s 2025-06-01 23:08:44.404790 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.90s 2025-06-01 23:08:44.404800 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.83s 2025-06-01 23:08:44.404809 | orchestrator | neutron : Copying over neutron_ovn_vpn_agent.ini ------------------------ 4.81s 2025-06-01 23:08:44.404819 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 4.72s 2025-06-01 23:08:44.404829 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 4.68s 2025-06-01 23:08:44.404838 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 4.08s 2025-06-01 23:08:44.404848 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 3.98s 2025-06-01 23:08:44.404864 | orchestrator | neutron : Copying over mlnx_agent.ini ----------------------------------- 3.96s 2025-06-01 23:08:44.404874 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 3.80s 2025-06-01 23:08:44.404883 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.71s 2025-06-01 23:08:44.404893 | orchestrator | 2025-06-01 23:08:44 | INFO  | Task 4bc0fdfb-9a1a-40eb-a25a-b10559955c87 is in state STARTED 2025-06-01 23:08:44.404902 | orchestrator | 2025-06-01 23:08:44 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:08:47.463086 | orchestrator | 2025-06-01 23:08:47 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:08:47.464500 | orchestrator | 2025-06-01 23:08:47 | INFO  | Task 
e2d156ee-7c68-487b-9423-f704c1f2ec53 is in state SUCCESS 2025-06-01 23:08:47.467398 | orchestrator | 2025-06-01 23:08:47.467460 | orchestrator | 2025-06-01 23:08:47.467481 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 23:08:47.467494 | orchestrator | 2025-06-01 23:08:47.467505 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 23:08:47.467517 | orchestrator | Sunday 01 June 2025 23:05:35 +0000 (0:00:00.530) 0:00:00.530 *********** 2025-06-01 23:08:47.467530 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:08:47.467542 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:08:47.467562 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:08:47.467581 | orchestrator | 2025-06-01 23:08:47.467965 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 23:08:47.467989 | orchestrator | Sunday 01 June 2025 23:05:35 +0000 (0:00:00.322) 0:00:00.853 *********** 2025-06-01 23:08:47.468350 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-06-01 23:08:47.468377 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-06-01 23:08:47.468388 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-06-01 23:08:47.468399 | orchestrator | 2025-06-01 23:08:47.468410 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-06-01 23:08:47.468421 | orchestrator | 2025-06-01 23:08:47.468432 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-01 23:08:47.468443 | orchestrator | Sunday 01 June 2025 23:05:36 +0000 (0:00:00.629) 0:00:01.483 *********** 2025-06-01 23:08:47.468454 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:08:47.468466 | orchestrator | 2025-06-01 
23:08:47.468476 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-06-01 23:08:47.468487 | orchestrator | Sunday 01 June 2025 23:05:37 +0000 (0:00:00.957) 0:00:02.440 *********** 2025-06-01 23:08:47.468498 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-06-01 23:08:47.468509 | orchestrator | 2025-06-01 23:08:47.468519 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-06-01 23:08:47.468530 | orchestrator | Sunday 01 June 2025 23:05:40 +0000 (0:00:03.531) 0:00:05.972 *********** 2025-06-01 23:08:47.468541 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-06-01 23:08:47.468552 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-06-01 23:08:47.468563 | orchestrator | 2025-06-01 23:08:47.468574 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-06-01 23:08:47.468602 | orchestrator | Sunday 01 June 2025 23:05:47 +0000 (0:00:06.368) 0:00:12.340 *********** 2025-06-01 23:08:47.468614 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-01 23:08:47.468624 | orchestrator | 2025-06-01 23:08:47.468635 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-06-01 23:08:47.468646 | orchestrator | Sunday 01 June 2025 23:05:50 +0000 (0:00:03.347) 0:00:15.688 *********** 2025-06-01 23:08:47.468658 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-01 23:08:47.468690 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-06-01 23:08:47.468702 | orchestrator | 2025-06-01 23:08:47.468712 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-06-01 23:08:47.468767 | orchestrator | Sunday 01 June 2025 23:05:53 +0000 
(0:00:03.291) 0:00:18.980 *********** 2025-06-01 23:08:47.468779 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-01 23:08:47.468790 | orchestrator | 2025-06-01 23:08:47.468800 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-06-01 23:08:47.468811 | orchestrator | Sunday 01 June 2025 23:05:56 +0000 (0:00:03.093) 0:00:22.074 *********** 2025-06-01 23:08:47.468822 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-06-01 23:08:47.468832 | orchestrator | 2025-06-01 23:08:47.468843 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-06-01 23:08:47.468854 | orchestrator | Sunday 01 June 2025 23:06:01 +0000 (0:00:04.229) 0:00:26.303 *********** 2025-06-01 23:08:47.468868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 23:08:47.469005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 23:08:47.469034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 23:08:47.469068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 23:08:47.469102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 23:08:47.469115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.469127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 23:08:47.469182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.469196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.469208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.469232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.469244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.469256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.469267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.469308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.469321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.469337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.469357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.469368 | orchestrator | 2025-06-01 23:08:47.469380 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-06-01 23:08:47.469391 | orchestrator | Sunday 01 June 2025 23:06:04 +0000 (0:00:03.183) 0:00:29.486 *********** 2025-06-01 23:08:47.469402 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:47.469413 | orchestrator | 2025-06-01 23:08:47.469424 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-06-01 23:08:47.469435 | orchestrator | Sunday 01 June 2025 23:06:04 +0000 
(0:00:00.187) 0:00:29.673 *********** 2025-06-01 23:08:47.469446 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:47.469457 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:47.469468 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:47.469479 | orchestrator | 2025-06-01 23:08:47.469490 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-01 23:08:47.469501 | orchestrator | Sunday 01 June 2025 23:06:05 +0000 (0:00:00.815) 0:00:30.489 *********** 2025-06-01 23:08:47.469512 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:08:47.469523 | orchestrator | 2025-06-01 23:08:47.469534 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-06-01 23:08:47.469545 | orchestrator | Sunday 01 June 2025 23:06:06 +0000 (0:00:01.505) 0:00:31.995 *********** 2025-06-01 23:08:47.469586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 23:08:47.469603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 23:08:47.469630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 23:08:47.469645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 23:08:47.469661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 23:08:47.469673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 23:08:47.469716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.469993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.470116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.470137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.470144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.470148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.470152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470303 | orchestrator |
2025-06-01 23:08:47.470308 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2025-06-01 23:08:47.470314 | orchestrator | Sunday 01 June 2025 23:06:13 +0000 (0:00:06.159) 0:00:38.154 ***********
2025-06-01 23:08:47.470321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-01 23:08:47.470343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-01 23:08:47.470353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-01 23:08:47.470365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-01 23:08:47.470373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470410 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:08:47.470418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470426 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:08:47.470430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-01 23:08:47.470453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-01 23:08:47.470459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470478 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:08:47.470483 | orchestrator |
2025-06-01 23:08:47.470487 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2025-06-01 23:08:47.470491 | orchestrator | Sunday 01 June 2025 23:06:14 +0000 (0:00:01.811) 0:00:39.966 ***********
2025-06-01 23:08:47.470495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-01 23:08:47.470515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-01 23:08:47.470520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470644 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:08:47.470649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-01 23:08:47.470673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-01 23:08:47.470678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470698 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:08:47.470702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-01 23:08:47.470750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-01 23:08:47.470756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470778 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:08:47.470787 | orchestrator |
2025-06-01 23:08:47.470791 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2025-06-01 23:08:47.470795 | orchestrator | Sunday 01 June 2025 23:06:17 +0000 (0:00:02.498) 0:00:42.464 ***********
2025-06-01 23:08:47.470799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-01 23:08:47.470818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-01 23:08:47.470823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-01 23:08:47.470830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-01 23:08:47.470835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-01 23:08:47.470843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-01 23:08:47.470861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-01 23:08:47.470931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.470935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.470942 | orchestrator | 2025-06-01 23:08:47.470947 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-06-01 23:08:47.470951 | orchestrator | Sunday 01 June 2025 23:06:23 +0000 (0:00:06.567) 0:00:49.031 *********** 2025-06-01 23:08:47.470955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 23:08:47.470976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 23:08:47.470984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 23:08:47.470994 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2025-06-01 23:08:47.471020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471043 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471117 | orchestrator | 2025-06-01 23:08:47.471121 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-06-01 23:08:47.471125 | orchestrator | Sunday 01 June 2025 23:06:45 +0000 (0:00:21.949) 0:01:10.981 
*********** 2025-06-01 23:08:47.471130 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-01 23:08:47.471134 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-01 23:08:47.471138 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-01 23:08:47.471142 | orchestrator | 2025-06-01 23:08:47.471146 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-06-01 23:08:47.471150 | orchestrator | Sunday 01 June 2025 23:06:53 +0000 (0:00:07.773) 0:01:18.754 *********** 2025-06-01 23:08:47.471154 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-01 23:08:47.471158 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-01 23:08:47.471162 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-01 23:08:47.471166 | orchestrator | 2025-06-01 23:08:47.471170 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-06-01 23:08:47.471174 | orchestrator | Sunday 01 June 2025 23:06:57 +0000 (0:00:03.774) 0:01:22.529 *********** 2025-06-01 23:08:47.471183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 23:08:47.471188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 23:08:47.471201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 23:08:47.471205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471295 | orchestrator | 
2025-06-01 23:08:47.471300 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-06-01 23:08:47.471305 | orchestrator | Sunday 01 June 2025 23:07:00 +0000 (0:00:02.916) 0:01:25.446 *********** 2025-06-01 23:08:47.471314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 23:08:47.471319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 23:08:47.471331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 23:08:47.471336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471435 | orchestrator | 2025-06-01 23:08:47.471439 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-01 23:08:47.471444 | orchestrator | Sunday 01 June 2025 23:07:03 +0000 (0:00:03.498) 0:01:28.944 *********** 2025-06-01 23:08:47.471448 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:47.471453 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:47.471458 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:47.471462 | orchestrator | 2025-06-01 23:08:47.471467 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-06-01 23:08:47.471471 | orchestrator | Sunday 01 June 2025 23:07:04 +0000 (0:00:00.835) 0:01:29.780 *********** 2025-06-01 23:08:47.471480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 23:08:47.471489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 23:08:47.471494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 
23:08:47.471502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471516 | orchestrator | skipping: [testbed-node-0] 2025-06-01 
23:08:47.471524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 23:08:47.471533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 23:08:47.471541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 23:08:47.471546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 23:08:47.471555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471590 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:08:47.471595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:08:47.471604 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:47.471612 | orchestrator | 2025-06-01 23:08:47.471617 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-06-01 23:08:47.471621 | orchestrator | Sunday 01 June 2025 23:07:05 +0000 (0:00:00.895) 0:01:30.676 *********** 2025-06-01 23:08:47.471631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 23:08:47.471636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 23:08:47.471641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 23:08:47.471645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:08:47.471789 | orchestrator | 2025-06-01 23:08:47.471793 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-01 23:08:47.471798 | orchestrator | Sunday 01 June 2025 23:07:09 +0000 (0:00:03.968) 0:01:34.645 *********** 2025-06-01 23:08:47.471802 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:08:47.471806 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:08:47.471810 | orchestrator | 
skipping: [testbed-node-2] 2025-06-01 23:08:47.471813 | orchestrator | 2025-06-01 23:08:47.471817 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-06-01 23:08:47.471821 | orchestrator | Sunday 01 June 2025 23:07:09 +0000 (0:00:00.286) 0:01:34.931 *********** 2025-06-01 23:08:47.471826 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-06-01 23:08:47.471831 | orchestrator | 2025-06-01 23:08:47.471835 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-06-01 23:08:47.471838 | orchestrator | Sunday 01 June 2025 23:07:12 +0000 (0:00:02.639) 0:01:37.571 *********** 2025-06-01 23:08:47.471843 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-01 23:08:47.471847 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-06-01 23:08:47.471851 | orchestrator | 2025-06-01 23:08:47.471855 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-06-01 23:08:47.471862 | orchestrator | Sunday 01 June 2025 23:07:14 +0000 (0:00:02.391) 0:01:39.962 *********** 2025-06-01 23:08:47.471866 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:08:47.471870 | orchestrator | 2025-06-01 23:08:47.471874 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-01 23:08:47.471878 | orchestrator | Sunday 01 June 2025 23:07:32 +0000 (0:00:17.229) 0:01:57.192 *********** 2025-06-01 23:08:47.471882 | orchestrator | 2025-06-01 23:08:47.471886 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-01 23:08:47.471890 | orchestrator | Sunday 01 June 2025 23:07:32 +0000 (0:00:00.067) 0:01:57.259 *********** 2025-06-01 23:08:47.471894 | orchestrator | 2025-06-01 23:08:47.471898 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-01 
23:08:47.471902 | orchestrator | Sunday 01 June 2025 23:07:32 +0000 (0:00:00.066) 0:01:57.325 *********** 2025-06-01 23:08:47.471906 | orchestrator | 2025-06-01 23:08:47.471910 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-06-01 23:08:47.471914 | orchestrator | Sunday 01 June 2025 23:07:32 +0000 (0:00:00.067) 0:01:57.393 *********** 2025-06-01 23:08:47.471918 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:08:47.471922 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:08:47.471926 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:08:47.471930 | orchestrator | 2025-06-01 23:08:47.471934 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-06-01 23:08:47.471938 | orchestrator | Sunday 01 June 2025 23:07:45 +0000 (0:00:13.321) 0:02:10.715 *********** 2025-06-01 23:08:47.471942 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:08:47.471946 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:08:47.471950 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:08:47.471953 | orchestrator | 2025-06-01 23:08:47.471957 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-06-01 23:08:47.471961 | orchestrator | Sunday 01 June 2025 23:07:56 +0000 (0:00:10.752) 0:02:21.467 *********** 2025-06-01 23:08:47.471965 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:08:47.471969 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:08:47.471973 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:08:47.471981 | orchestrator | 2025-06-01 23:08:47.471985 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-06-01 23:08:47.471989 | orchestrator | Sunday 01 June 2025 23:08:07 +0000 (0:00:11.389) 0:02:32.857 *********** 2025-06-01 23:08:47.471993 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:08:47.471997 | orchestrator | 
changed: [testbed-node-1] 2025-06-01 23:08:47.472001 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:08:47.472005 | orchestrator | 2025-06-01 23:08:47.472009 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-06-01 23:08:47.472016 | orchestrator | Sunday 01 June 2025 23:08:13 +0000 (0:00:06.015) 0:02:38.872 *********** 2025-06-01 23:08:47.472020 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:08:47.472024 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:08:47.472028 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:08:47.472032 | orchestrator | 2025-06-01 23:08:47.472036 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-06-01 23:08:47.472040 | orchestrator | Sunday 01 June 2025 23:08:25 +0000 (0:00:12.106) 0:02:50.979 *********** 2025-06-01 23:08:47.472044 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:08:47.472048 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:08:47.472052 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:08:47.472056 | orchestrator | 2025-06-01 23:08:47.472060 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-06-01 23:08:47.472064 | orchestrator | Sunday 01 June 2025 23:08:38 +0000 (0:00:12.757) 0:03:03.737 *********** 2025-06-01 23:08:47.472068 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:08:47.472072 | orchestrator | 2025-06-01 23:08:47.472076 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:08:47.472081 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-01 23:08:47.472088 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-01 23:08:47.472094 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  
rescued=0 ignored=0 2025-06-01 23:08:47.472101 | orchestrator | 2025-06-01 23:08:47.472107 | orchestrator | 2025-06-01 23:08:47.472113 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 23:08:47.472119 | orchestrator | Sunday 01 June 2025 23:08:46 +0000 (0:00:07.738) 0:03:11.475 *********** 2025-06-01 23:08:47.472125 | orchestrator | =============================================================================== 2025-06-01 23:08:47.472132 | orchestrator | designate : Copying over designate.conf -------------------------------- 21.95s 2025-06-01 23:08:47.472138 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.23s 2025-06-01 23:08:47.472144 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.32s 2025-06-01 23:08:47.472150 | orchestrator | designate : Restart designate-worker container ------------------------- 12.76s 2025-06-01 23:08:47.472157 | orchestrator | designate : Restart designate-mdns container --------------------------- 12.11s 2025-06-01 23:08:47.472163 | orchestrator | designate : Restart designate-central container ------------------------ 11.39s 2025-06-01 23:08:47.472169 | orchestrator | designate : Restart designate-api container ---------------------------- 10.75s 2025-06-01 23:08:47.472176 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.77s 2025-06-01 23:08:47.472183 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.74s 2025-06-01 23:08:47.472189 | orchestrator | designate : Copying over config.json files for services ----------------- 6.57s 2025-06-01 23:08:47.472200 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.37s 2025-06-01 23:08:47.472204 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.16s 2025-06-01 23:08:47.472208 | 
orchestrator | designate : Restart designate-producer container ------------------------ 6.02s 2025-06-01 23:08:47.472220 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.23s 2025-06-01 23:08:47.472226 | orchestrator | designate : Check designate containers ---------------------------------- 3.97s 2025-06-01 23:08:47.472232 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.77s 2025-06-01 23:08:47.472238 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.53s 2025-06-01 23:08:47.472244 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.50s 2025-06-01 23:08:47.472250 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.35s 2025-06-01 23:08:47.472256 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.29s 2025-06-01 23:08:47.472263 | orchestrator | 2025-06-01 23:08:47 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:08:47.472269 | orchestrator | 2025-06-01 23:08:47 | INFO  | Task 4bc0fdfb-9a1a-40eb-a25a-b10559955c87 is in state STARTED 2025-06-01 23:08:47.472276 | orchestrator | 2025-06-01 23:08:47 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:08:50.526218 | orchestrator | 2025-06-01 23:08:50 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:08:50.526793 | orchestrator | 2025-06-01 23:08:50 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:08:50.527454 | orchestrator | 2025-06-01 23:08:50 | INFO  | Task 4bc0fdfb-9a1a-40eb-a25a-b10559955c87 is in state STARTED 2025-06-01 23:08:50.528435 | orchestrator | 2025-06-01 23:08:50 | INFO  | Task 3c745db8-f178-411d-bb74-dfc2ec08e2f7 is in state STARTED 2025-06-01 23:08:50.528464 | orchestrator | 2025-06-01 23:08:50 | INFO  | Wait 1 second(s) until the next 
check 2025-06-01 23:08:53.570345 | orchestrator | 2025-06-01 23:08:53 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:09:48.732929 | orchestrator | 2025-06-01 23:09:48 | INFO  | Task
f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:09:48.733897 | orchestrator | 2025-06-01 23:09:48 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:09:48.734531 | orchestrator | 2025-06-01 23:09:48 | INFO  | Task 4bc0fdfb-9a1a-40eb-a25a-b10559955c87 is in state STARTED 2025-06-01 23:09:48.735462 | orchestrator | 2025-06-01 23:09:48 | INFO  | Task 3c745db8-f178-411d-bb74-dfc2ec08e2f7 is in state STARTED 2025-06-01 23:09:48.735487 | orchestrator | 2025-06-01 23:09:48 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:09:51.783321 | orchestrator | 2025-06-01 23:09:51 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:09:51.785455 | orchestrator | 2025-06-01 23:09:51 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:09:51.787414 | orchestrator | 2025-06-01 23:09:51 | INFO  | Task 4bc0fdfb-9a1a-40eb-a25a-b10559955c87 is in state STARTED 2025-06-01 23:09:51.789101 | orchestrator | 2025-06-01 23:09:51 | INFO  | Task 3c745db8-f178-411d-bb74-dfc2ec08e2f7 is in state STARTED 2025-06-01 23:09:51.789599 | orchestrator | 2025-06-01 23:09:51 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:09:54.845720 | orchestrator | 2025-06-01 23:09:54 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:09:54.847866 | orchestrator | 2025-06-01 23:09:54 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:09:54.849706 | orchestrator | 2025-06-01 23:09:54 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:09:54.857628 | orchestrator | 2025-06-01 23:09:54 | INFO  | Task 4bc0fdfb-9a1a-40eb-a25a-b10559955c87 is in state SUCCESS 2025-06-01 23:09:54.857667 | orchestrator | 2025-06-01 23:09:54 | INFO  | Task 3c745db8-f178-411d-bb74-dfc2ec08e2f7 is in state STARTED 2025-06-01 23:09:54.857680 | orchestrator | 2025-06-01 23:09:54 | INFO  | Wait 1 
second(s) until the next check 2025-06-01 23:09:54.859908 | orchestrator | 2025-06-01 23:09:54.859939 | orchestrator | 2025-06-01 23:09:54.859951 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 23:09:54.859962 | orchestrator | 2025-06-01 23:09:54.859973 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 23:09:54.860001 | orchestrator | Sunday 01 June 2025 23:08:47 +0000 (0:00:00.269) 0:00:00.269 *********** 2025-06-01 23:09:54.860013 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:09:54.860025 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:09:54.860036 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:09:54.860046 | orchestrator | 2025-06-01 23:09:54.860057 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 23:09:54.860068 | orchestrator | Sunday 01 June 2025 23:08:47 +0000 (0:00:00.295) 0:00:00.564 *********** 2025-06-01 23:09:54.860080 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-06-01 23:09:54.860092 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-06-01 23:09:54.860103 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-06-01 23:09:54.860114 | orchestrator | 2025-06-01 23:09:54.860125 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-06-01 23:09:54.860136 | orchestrator | 2025-06-01 23:09:54.860146 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-01 23:09:54.860157 | orchestrator | Sunday 01 June 2025 23:08:47 +0000 (0:00:00.467) 0:00:01.031 *********** 2025-06-01 23:09:54.860168 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:09:54.860179 | orchestrator | 2025-06-01 23:09:54.860190 | orchestrator | TASK 
[service-ks-register : placement | Creating services] ********************* 2025-06-01 23:09:54.860201 | orchestrator | Sunday 01 June 2025 23:08:48 +0000 (0:00:00.542) 0:00:01.573 *********** 2025-06-01 23:09:54.860212 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-06-01 23:09:54.860222 | orchestrator | 2025-06-01 23:09:54.860233 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-06-01 23:09:54.860244 | orchestrator | Sunday 01 June 2025 23:08:51 +0000 (0:00:03.251) 0:00:04.825 *********** 2025-06-01 23:09:54.860254 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-06-01 23:09:54.860266 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-06-01 23:09:54.860277 | orchestrator | 2025-06-01 23:09:54.860288 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-06-01 23:09:54.860299 | orchestrator | Sunday 01 June 2025 23:08:57 +0000 (0:00:06.313) 0:00:11.138 *********** 2025-06-01 23:09:54.860310 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-01 23:09:54.860321 | orchestrator | 2025-06-01 23:09:54.860332 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-06-01 23:09:54.860342 | orchestrator | Sunday 01 June 2025 23:09:01 +0000 (0:00:03.283) 0:00:14.422 *********** 2025-06-01 23:09:54.860353 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-01 23:09:54.860385 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-06-01 23:09:54.860396 | orchestrator | 2025-06-01 23:09:54.860407 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-06-01 23:09:54.860418 | orchestrator | Sunday 01 June 2025 23:09:05 +0000 (0:00:03.842) 0:00:18.265 
*********** 2025-06-01 23:09:54.860428 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-01 23:09:54.860439 | orchestrator | 2025-06-01 23:09:54.860450 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-06-01 23:09:54.860461 | orchestrator | Sunday 01 June 2025 23:09:08 +0000 (0:00:03.066) 0:00:21.331 *********** 2025-06-01 23:09:54.860472 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-06-01 23:09:54.860482 | orchestrator | 2025-06-01 23:09:54.860493 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-01 23:09:54.860504 | orchestrator | Sunday 01 June 2025 23:09:12 +0000 (0:00:04.065) 0:00:25.397 *********** 2025-06-01 23:09:54.860515 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:09:54.860526 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:09:54.860536 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:09:54.860547 | orchestrator | 2025-06-01 23:09:54.860558 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-06-01 23:09:54.860568 | orchestrator | Sunday 01 June 2025 23:09:12 +0000 (0:00:00.376) 0:00:25.773 *********** 2025-06-01 23:09:54.860584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 23:09:54.860618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 23:09:54.860632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 23:09:54.860650 | orchestrator | 2025-06-01 23:09:54.860662 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-06-01 23:09:54.860673 | orchestrator | Sunday 01 June 2025 23:09:13 +0000 (0:00:01.208) 0:00:26.981 *********** 2025-06-01 23:09:54.860684 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:09:54.860695 | orchestrator | 2025-06-01 23:09:54.860706 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-06-01 23:09:54.860717 | orchestrator | Sunday 01 June 2025 23:09:13 +0000 (0:00:00.123) 0:00:27.105 *********** 2025-06-01 23:09:54.860728 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:09:54.860739 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:09:54.860793 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:09:54.860804 | orchestrator | 2025-06-01 23:09:54.860815 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-01 23:09:54.860826 | orchestrator | Sunday 01 June 2025 23:09:14 +0000 (0:00:00.763) 0:00:27.869 *********** 2025-06-01 23:09:54.860837 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:09:54.860848 | orchestrator | 2025-06-01 23:09:54.860859 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-06-01 23:09:54.860870 | orchestrator | Sunday 01 June 2025 23:09:15 +0000 (0:00:00.681) 0:00:28.550 *********** 2025-06-01 23:09:54.860882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 23:09:54.860908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 23:09:54.860921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 23:09:54.860939 | orchestrator | 2025-06-01 23:09:54.860951 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-06-01 23:09:54.860962 | orchestrator | Sunday 01 June 2025 23:09:17 +0000 (0:00:01.830) 0:00:30.381 *********** 2025-06-01 23:09:54.860973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 23:09:54.860984 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:09:54.860996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 
'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 23:09:54.861007 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:09:54.861027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 23:09:54.861044 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:09:54.861055 | orchestrator | 2025-06-01 
23:09:54.861067 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-06-01 23:09:54.861077 | orchestrator | Sunday 01 June 2025 23:09:18 +0000 (0:00:01.096) 0:00:31.478 *********** 2025-06-01 23:09:54.861088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 23:09:54.861107 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:09:54.861118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 23:09:54.861130 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:09:54.861141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 23:09:54.861152 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:09:54.861162 | orchestrator | 2025-06-01 23:09:54.861173 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-06-01 23:09:54.861184 | orchestrator | Sunday 01 June 2025 23:09:18 +0000 (0:00:00.699) 0:00:32.178 *********** 2025-06-01 23:09:54.861209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 23:09:54.861221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 23:09:54.861241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 23:09:54.861252 | orchestrator | 2025-06-01 23:09:54.861263 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-06-01 23:09:54.861274 | orchestrator | Sunday 01 June 2025 23:09:20 +0000 (0:00:01.485) 0:00:33.663 *********** 2025-06-01 23:09:54.861285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 23:09:54.861297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 23:09:54.861322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 23:09:54.861340 | orchestrator | 2025-06-01 23:09:54.861351 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-06-01 23:09:54.861362 | orchestrator | Sunday 01 June 2025 23:09:22 +0000 (0:00:02.434) 0:00:36.097 *********** 2025-06-01 23:09:54.861373 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-01 23:09:54.861384 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-01 23:09:54.861395 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-01 23:09:54.861406 | orchestrator | 2025-06-01 23:09:54.861417 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-06-01 23:09:54.861427 | orchestrator | Sunday 01 June 2025 23:09:24 +0000 (0:00:01.525) 0:00:37.622 *********** 2025-06-01 23:09:54.861438 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:09:54.861449 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:09:54.861460 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:09:54.861470 | orchestrator | 2025-06-01 23:09:54.861481 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-06-01 23:09:54.861492 | orchestrator | Sunday 01 June 2025 23:09:25 +0000 (0:00:01.328) 0:00:38.950 *********** 2025-06-01 23:09:54.861503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 23:09:54.861530 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:09:54.861542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 23:09:54.861560 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:09:54.861590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 23:09:54.861602 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:09:54.861613 | orchestrator | 2025-06-01 23:09:54.861624 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-06-01 23:09:54.861635 | orchestrator | Sunday 01 June 2025 23:09:26 +0000 (0:00:00.531) 0:00:39.482 *********** 2025-06-01 23:09:54.861646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 23:09:54.861658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 23:09:54.861670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 23:09:54.861688 | orchestrator | 2025-06-01 23:09:54.861699 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-06-01 23:09:54.861709 | orchestrator | Sunday 01 June 2025 23:09:27 +0000 (0:00:01.459) 0:00:40.942 *********** 2025-06-01 23:09:54.861720 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:09:54.861731 | orchestrator | 2025-06-01 23:09:54.861742 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-06-01 23:09:54.861777 | orchestrator | Sunday 01 June 2025 23:09:29 +0000 (0:00:02.007) 0:00:42.950 *********** 
2025-06-01 23:09:54.861788 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:09:54.861799 | orchestrator |
2025-06-01 23:09:54.861809 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2025-06-01 23:09:54.861820 | orchestrator | Sunday 01 June 2025 23:09:31 +0000 (0:00:02.063) 0:00:45.014 ***********
2025-06-01 23:09:54.861837 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:09:54.861848 | orchestrator |
2025-06-01 23:09:54.861859 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-06-01 23:09:54.861870 | orchestrator | Sunday 01 June 2025 23:09:44 +0000 (0:00:12.989) 0:00:58.003 ***********
2025-06-01 23:09:54.861881 | orchestrator |
2025-06-01 23:09:54.861896 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-06-01 23:09:54.861908 | orchestrator | Sunday 01 June 2025 23:09:44 +0000 (0:00:00.074) 0:00:58.077 ***********
2025-06-01 23:09:54.861918 | orchestrator |
2025-06-01 23:09:54.861929 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-06-01 23:09:54.861940 | orchestrator | Sunday 01 June 2025 23:09:44 +0000 (0:00:00.071) 0:00:58.148 ***********
2025-06-01 23:09:54.861950 | orchestrator |
2025-06-01 23:09:54.861961 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-06-01 23:09:54.861972 | orchestrator | Sunday 01 June 2025 23:09:44 +0000 (0:00:00.070) 0:00:58.219 ***********
2025-06-01 23:09:54.861982 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:09:54.861993 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:09:54.862004 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:09:54.862060 | orchestrator |
2025-06-01 23:09:54.862075 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:09:54.862088 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-01 23:09:54.862100 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-01 23:09:54.862112 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-01 23:09:54.862122 | orchestrator |
2025-06-01 23:09:54.862133 | orchestrator |
2025-06-01 23:09:54.862144 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:09:54.862155 | orchestrator | Sunday 01 June 2025 23:09:53 +0000 (0:00:08.207) 0:01:06.426 ***********
2025-06-01 23:09:54.862166 | orchestrator | ===============================================================================
2025-06-01 23:09:54.862177 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.99s
2025-06-01 23:09:54.862188 | orchestrator | placement : Restart placement-api container ----------------------------- 8.21s
2025-06-01 23:09:54.862198 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.31s
2025-06-01 23:09:54.862209 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.07s
2025-06-01 23:09:54.862220 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.84s
2025-06-01 23:09:54.862231 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.28s
2025-06-01 23:09:54.862242 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.25s
2025-06-01 23:09:54.862261 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.07s
2025-06-01 23:09:54.862272 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.43s
2025-06-01 23:09:54.862282 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.06s
2025-06-01 23:09:54.862293 | orchestrator | placement : Creating placement databases -------------------------------- 2.01s
2025-06-01 23:09:54.862304 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.83s
2025-06-01 23:09:54.862315 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.53s
2025-06-01 23:09:54.862325 | orchestrator | placement : Copying over config.json files for services ----------------- 1.49s
2025-06-01 23:09:54.862336 | orchestrator | placement : Check placement containers ---------------------------------- 1.46s
2025-06-01 23:09:54.862347 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.33s
2025-06-01 23:09:54.862358 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.21s
2025-06-01 23:09:54.862369 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.10s
2025-06-01 23:09:54.862379 | orchestrator | placement : Set placement policy file ----------------------------------- 0.76s
2025-06-01 23:09:54.862390 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.70s
2025-06-01 23:09:57.913350 | orchestrator | 2025-06-01 23:09:57 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED
2025-06-01 23:09:57.914335 | orchestrator | 2025-06-01 23:09:57 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:09:57.915603 | orchestrator | 2025-06-01 23:09:57 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED
2025-06-01 23:09:57.917096 | orchestrator | 2025-06-01 23:09:57 | INFO  | Task 3c745db8-f178-411d-bb74-dfc2ec08e2f7 is in state STARTED
2025-06-01 23:09:57.917120 | orchestrator | 2025-06-01 23:09:57 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:10:00.973152 | orchestrator | 2025-06-01 23:10:00 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED
2025-06-01 23:10:00.975440 | orchestrator | 2025-06-01 23:10:00 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:10:00.977848 | orchestrator | 2025-06-01 23:10:00 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED
2025-06-01 23:10:00.979700 | orchestrator | 2025-06-01 23:10:00 | INFO  | Task 3c745db8-f178-411d-bb74-dfc2ec08e2f7 is in state STARTED
2025-06-01 23:10:00.979773 | orchestrator | 2025-06-01 23:10:00 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:10:04.033197 | orchestrator | 2025-06-01 23:10:04 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED
2025-06-01 23:10:04.033431 | orchestrator | 2025-06-01 23:10:04 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:10:04.036720 | orchestrator | 2025-06-01 23:10:04 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED
2025-06-01 23:10:04.039056 | orchestrator | 2025-06-01 23:10:04 | INFO  | Task 3c745db8-f178-411d-bb74-dfc2ec08e2f7 is in state STARTED
2025-06-01 23:10:04.039089 | orchestrator | 2025-06-01 23:10:04 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:10:07.109429 | orchestrator | 2025-06-01 23:10:07 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED
2025-06-01 23:10:07.110975 | orchestrator | 2025-06-01 23:10:07 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:10:07.113989 | orchestrator | 2025-06-01 23:10:07 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED
2025-06-01 23:10:07.114637 | orchestrator | 2025-06-01 23:10:07 | INFO  | Task 3c745db8-f178-411d-bb74-dfc2ec08e2f7 is in state STARTED
2025-06-01 23:10:07.114668 | orchestrator | 2025-06-01 23:10:07 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:10:10.153717 | orchestrator | 2025-06-01 23:10:10 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED
2025-06-01 23:10:10.154889 | orchestrator | 2025-06-01 23:10:10 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:10:10.156407 | orchestrator | 2025-06-01 23:10:10 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED
2025-06-01 23:10:10.158361 | orchestrator | 2025-06-01 23:10:10 | INFO  | Task 3c745db8-f178-411d-bb74-dfc2ec08e2f7 is in state STARTED
2025-06-01 23:10:10.158396 | orchestrator | 2025-06-01 23:10:10 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:10:13.210944 | orchestrator | 2025-06-01 23:10:13 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED
2025-06-01 23:10:13.212817 | orchestrator | 2025-06-01 23:10:13 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:10:13.215248 | orchestrator | 2025-06-01 23:10:13 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED
2025-06-01 23:10:13.219398 | orchestrator | 2025-06-01 23:10:13 | INFO  | Task 3c745db8-f178-411d-bb74-dfc2ec08e2f7 is in state STARTED
2025-06-01 23:10:13.219456 | orchestrator | 2025-06-01 23:10:13 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:10:16.276020 | orchestrator | 2025-06-01 23:10:16 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED
2025-06-01 23:10:16.276132 | orchestrator | 2025-06-01 23:10:16 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:10:16.276987 | orchestrator | 2025-06-01 23:10:16 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED
2025-06-01 23:10:16.277956 | orchestrator | 2025-06-01 23:10:16 | INFO  | Task 3c745db8-f178-411d-bb74-dfc2ec08e2f7 is in state STARTED
2025-06-01 23:10:16.278230 | orchestrator | 2025-06-01 23:10:16 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:10:19.329223 | orchestrator | 2025-06-01 23:10:19 | INFO  | Task
f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:10:19.333233 | orchestrator | 2025-06-01 23:10:19 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:10:19.335451 | orchestrator | 2025-06-01 23:10:19 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:10:19.338687 | orchestrator | 2025-06-01 23:10:19 | INFO  | Task 3c745db8-f178-411d-bb74-dfc2ec08e2f7 is in state STARTED 2025-06-01 23:10:19.338735 | orchestrator | 2025-06-01 23:10:19 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:10:22.376090 | orchestrator | 2025-06-01 23:10:22 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:10:22.377633 | orchestrator | 2025-06-01 23:10:22 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:10:22.380005 | orchestrator | 2025-06-01 23:10:22 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:10:22.382991 | orchestrator | 2025-06-01 23:10:22 | INFO  | Task 3c745db8-f178-411d-bb74-dfc2ec08e2f7 is in state STARTED 2025-06-01 23:10:22.383031 | orchestrator | 2025-06-01 23:10:22 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:10:25.424940 | orchestrator | 2025-06-01 23:10:25 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:10:25.425188 | orchestrator | 2025-06-01 23:10:25 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:10:25.426636 | orchestrator | 2025-06-01 23:10:25 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:10:25.427696 | orchestrator | 2025-06-01 23:10:25 | INFO  | Task 3c745db8-f178-411d-bb74-dfc2ec08e2f7 is in state STARTED 2025-06-01 23:10:25.427971 | orchestrator | 2025-06-01 23:10:25 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:10:28.470348 | orchestrator | 2025-06-01 23:10:28 | INFO  | Task 
f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:10:28.472576 | orchestrator | 2025-06-01 23:10:28 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:10:28.472608 | orchestrator | 2025-06-01 23:10:28 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:10:28.473521 | orchestrator | 2025-06-01 23:10:28 | INFO  | Task 3c745db8-f178-411d-bb74-dfc2ec08e2f7 is in state STARTED 2025-06-01 23:10:28.473543 | orchestrator | 2025-06-01 23:10:28 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:10:31.521244 | orchestrator | 2025-06-01 23:10:31 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:10:31.524045 | orchestrator | 2025-06-01 23:10:31 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:10:31.526310 | orchestrator | 2025-06-01 23:10:31 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:10:31.528013 | orchestrator | 2025-06-01 23:10:31 | INFO  | Task 3c745db8-f178-411d-bb74-dfc2ec08e2f7 is in state STARTED 2025-06-01 23:10:31.528042 | orchestrator | 2025-06-01 23:10:31 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:10:34.580322 | orchestrator | 2025-06-01 23:10:34 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:10:34.582248 | orchestrator | 2025-06-01 23:10:34 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:10:34.583115 | orchestrator | 2025-06-01 23:10:34 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:10:34.584888 | orchestrator | 2025-06-01 23:10:34 | INFO  | Task 3c745db8-f178-411d-bb74-dfc2ec08e2f7 is in state STARTED 2025-06-01 23:10:34.584915 | orchestrator | 2025-06-01 23:10:34 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:10:37.632208 | orchestrator | 2025-06-01 23:10:37 | INFO  | Task 
f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:10:37.633744 | orchestrator | 2025-06-01 23:10:37 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:10:37.634736 | orchestrator | 2025-06-01 23:10:37 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:10:37.637018 | orchestrator | 2025-06-01 23:10:37 | INFO  | Task 3c745db8-f178-411d-bb74-dfc2ec08e2f7 is in state STARTED 2025-06-01 23:10:37.637106 | orchestrator | 2025-06-01 23:10:37 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:10:40.672400 | orchestrator | 2025-06-01 23:10:40 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:10:40.672660 | orchestrator | 2025-06-01 23:10:40 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:10:40.673254 | orchestrator | 2025-06-01 23:10:40 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:10:40.674746 | orchestrator | 2025-06-01 23:10:40 | INFO  | Task 3c745db8-f178-411d-bb74-dfc2ec08e2f7 is in state SUCCESS 2025-06-01 23:10:40.676434 | orchestrator | 2025-06-01 23:10:40.676501 | orchestrator | 2025-06-01 23:10:40.676515 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 23:10:40.676559 | orchestrator | 2025-06-01 23:10:40.676571 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 23:10:40.676583 | orchestrator | Sunday 01 June 2025 23:08:51 +0000 (0:00:00.281) 0:00:00.281 *********** 2025-06-01 23:10:40.676594 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:10:40.676606 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:10:40.676660 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:10:40.676674 | orchestrator | 2025-06-01 23:10:40.676686 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 
23:10:40.676697 | orchestrator | Sunday 01 June 2025 23:08:51 +0000 (0:00:00.307) 0:00:00.588 *********** 2025-06-01 23:10:40.676708 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-06-01 23:10:40.676720 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-06-01 23:10:40.676731 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-06-01 23:10:40.676742 | orchestrator | 2025-06-01 23:10:40.676754 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-06-01 23:10:40.676765 | orchestrator | 2025-06-01 23:10:40.676825 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-01 23:10:40.676839 | orchestrator | Sunday 01 June 2025 23:08:51 +0000 (0:00:00.426) 0:00:01.014 *********** 2025-06-01 23:10:40.676850 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:10:40.676862 | orchestrator | 2025-06-01 23:10:40.676873 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-06-01 23:10:40.676884 | orchestrator | Sunday 01 June 2025 23:08:52 +0000 (0:00:00.533) 0:00:01.548 *********** 2025-06-01 23:10:40.676896 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-06-01 23:10:40.676907 | orchestrator | 2025-06-01 23:10:40.676918 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-06-01 23:10:40.676929 | orchestrator | Sunday 01 June 2025 23:08:55 +0000 (0:00:03.345) 0:00:04.893 *********** 2025-06-01 23:10:40.676940 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-06-01 23:10:40.676951 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-06-01 23:10:40.676962 | orchestrator | 
2025-06-01 23:10:40.676972 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-06-01 23:10:40.676983 | orchestrator | Sunday 01 June 2025 23:09:02 +0000 (0:00:06.491) 0:00:11.385 *********** 2025-06-01 23:10:40.676994 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-01 23:10:40.677005 | orchestrator | 2025-06-01 23:10:40.677016 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-06-01 23:10:40.677027 | orchestrator | Sunday 01 June 2025 23:09:05 +0000 (0:00:03.398) 0:00:14.784 *********** 2025-06-01 23:10:40.677039 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-01 23:10:40.677053 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-06-01 23:10:40.677066 | orchestrator | 2025-06-01 23:10:40.677078 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-06-01 23:10:40.677090 | orchestrator | Sunday 01 June 2025 23:09:09 +0000 (0:00:03.868) 0:00:18.652 *********** 2025-06-01 23:10:40.677102 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-01 23:10:40.677114 | orchestrator | 2025-06-01 23:10:40.677126 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-06-01 23:10:40.677139 | orchestrator | Sunday 01 June 2025 23:09:13 +0000 (0:00:03.753) 0:00:22.406 *********** 2025-06-01 23:10:40.677152 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-06-01 23:10:40.677163 | orchestrator | 2025-06-01 23:10:40.677176 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-06-01 23:10:40.677188 | orchestrator | Sunday 01 June 2025 23:09:17 +0000 (0:00:04.062) 0:00:26.468 *********** 2025-06-01 23:10:40.677211 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:10:40.677223 | orchestrator | 2025-06-01 23:10:40.677236 | 
orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-06-01 23:10:40.677248 | orchestrator | Sunday 01 June 2025 23:09:20 +0000 (0:00:03.558) 0:00:30.027 *********** 2025-06-01 23:10:40.677261 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:10:40.677273 | orchestrator | 2025-06-01 23:10:40.677286 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-06-01 23:10:40.677298 | orchestrator | Sunday 01 June 2025 23:09:24 +0000 (0:00:03.926) 0:00:33.954 *********** 2025-06-01 23:10:40.677311 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:10:40.677324 | orchestrator | 2025-06-01 23:10:40.677336 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-06-01 23:10:40.677349 | orchestrator | Sunday 01 June 2025 23:09:28 +0000 (0:00:03.743) 0:00:37.698 *********** 2025-06-01 23:10:40.677383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 23:10:40.677406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 23:10:40.677419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 23:10:40.677432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:10:40.677453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:10:40.677472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:10:40.677485 | orchestrator | 2025-06-01 23:10:40.677496 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-06-01 23:10:40.677507 | orchestrator | Sunday 01 June 2025 23:09:29 +0000 (0:00:01.299) 0:00:38.997 *********** 2025-06-01 23:10:40.677519 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:10:40.677530 | orchestrator | 2025-06-01 23:10:40.677541 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-06-01 23:10:40.677551 | orchestrator | Sunday 01 June 2025 23:09:29 +0000 (0:00:00.142) 0:00:39.139 *********** 2025-06-01 23:10:40.677562 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:10:40.677573 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:10:40.677584 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:10:40.677595 | orchestrator | 2025-06-01 23:10:40.677606 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-06-01 23:10:40.677622 | orchestrator | Sunday 01 June 2025 23:09:30 +0000 (0:00:00.545) 0:00:39.685 *********** 2025-06-01 23:10:40.677633 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-01 23:10:40.677644 | orchestrator | 2025-06-01 23:10:40.677655 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-06-01 23:10:40.677666 | orchestrator | Sunday 01 June 2025 23:09:31 +0000 (0:00:00.914) 0:00:40.600 *********** 2025-06-01 23:10:40.677677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 23:10:40.677696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 23:10:40.677708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 23:10:40.677728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:10:40.677746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:10:40.677758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:10:40.677776 | orchestrator | 2025-06-01 23:10:40.677805 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-06-01 23:10:40.677816 | orchestrator | Sunday 01 June 2025 23:09:33 +0000 (0:00:02.264) 0:00:42.864 *********** 2025-06-01 23:10:40.677828 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:10:40.677839 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:10:40.677850 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:10:40.677861 | orchestrator | 2025-06-01 23:10:40.677872 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-01 23:10:40.677883 | orchestrator | Sunday 01 June 2025 23:09:33 +0000 (0:00:00.345) 0:00:43.210 *********** 2025-06-01 23:10:40.677894 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:10:40.677905 | orchestrator | 2025-06-01 23:10:40.677916 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-06-01 23:10:40.677927 
| orchestrator | Sunday 01 June 2025 23:09:34 +0000 (0:00:00.773) 0:00:43.983 *********** 2025-06-01 23:10:40.677939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 23:10:40.677960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 
2025-06-01 23:10:40.677977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 23:10:40.677995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:10:40.678007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:10:40.678071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:10:40.678085 | orchestrator | 2025-06-01 23:10:40.678097 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-06-01 23:10:40.678109 | orchestrator | Sunday 01 June 2025 23:09:37 +0000 (0:00:02.326) 0:00:46.310 *********** 2025-06-01 23:10:40.678128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 23:10:40.678145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:10:40.678165 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:10:40.678177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 23:10:40.678189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:10:40.678200 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:10:40.678212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 23:10:40.678231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:10:40.678243 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:10:40.678270 | orchestrator | 2025-06-01 23:10:40.678283 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-06-01 23:10:40.678300 | orchestrator | Sunday 01 June 2025 23:09:37 +0000 (0:00:00.666) 0:00:46.977 *********** 2025-06-01 23:10:40.678312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 23:10:40.678324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:10:40.678335 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:10:40.678347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 23:10:40.678365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:10:40.678377 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:10:40.678394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 
23:10:40.678415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:10:40.678427 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:10:40.678438 | orchestrator | 2025-06-01 23:10:40.678450 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-06-01 23:10:40.678462 | orchestrator | Sunday 01 June 2025 23:09:39 +0000 (0:00:01.405) 0:00:48.382 *********** 2025-06-01 23:10:40.678473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 23:10:40.678486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 23:10:40.678509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 
23:10:40.678538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:10:40.678551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:10:40.678562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:10:40.678573 | orchestrator | 2025-06-01 23:10:40.678584 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-06-01 23:10:40.678596 | orchestrator | Sunday 01 June 2025 23:09:41 +0000 (0:00:02.631) 0:00:51.014 *********** 2025-06-01 23:10:40.678607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 23:10:40.678633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 23:10:40.678652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 23:10:40.678664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:10:40.678675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:10:40.678687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:10:40.678706 | orchestrator | 2025-06-01 23:10:40.678717 | orchestrator | TASK [magnum : Copying over existing 
policy file] ****************************** 2025-06-01 23:10:40.678734 | orchestrator | Sunday 01 June 2025 23:09:50 +0000 (0:00:09.099) 0:01:00.114 *********** 2025-06-01 23:10:40.678751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 23:10:40.678763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:10:40.678774 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:10:40.678804 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 23:10:40.678817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:10:40.678828 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:10:40.678850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 23:10:40.678873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:10:40.678885 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:10:40.678896 | orchestrator | 2025-06-01 23:10:40.678907 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-06-01 23:10:40.678919 | orchestrator | Sunday 01 June 2025 23:09:51 +0000 (0:00:00.919) 0:01:01.033 *********** 2025-06-01 23:10:40.678930 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 23:10:40.678942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 23:10:40.678954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 23:10:40.678980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:10:40.678997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:10:40.679008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:10:40.679020 | orchestrator | 2025-06-01 23:10:40.679031 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-01 23:10:40.679042 | orchestrator | Sunday 01 June 2025 23:09:53 +0000 (0:00:02.129) 0:01:03.163 *********** 2025-06-01 23:10:40.679053 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:10:40.679064 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:10:40.679075 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:10:40.679086 | orchestrator | 2025-06-01 23:10:40.679097 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-06-01 23:10:40.679108 | orchestrator | Sunday 01 June 2025 23:09:54 +0000 (0:00:00.304) 0:01:03.467 *********** 2025-06-01 23:10:40.679118 | orchestrator | changed: 
[testbed-node-0] 2025-06-01 23:10:40.679129 | orchestrator | 2025-06-01 23:10:40.679140 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-06-01 23:10:40.679151 | orchestrator | Sunday 01 June 2025 23:09:56 +0000 (0:00:02.022) 0:01:05.490 *********** 2025-06-01 23:10:40.679162 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:10:40.679173 | orchestrator | 2025-06-01 23:10:40.679185 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-06-01 23:10:40.679202 | orchestrator | Sunday 01 June 2025 23:09:58 +0000 (0:00:02.087) 0:01:07.577 *********** 2025-06-01 23:10:40.679213 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:10:40.679224 | orchestrator | 2025-06-01 23:10:40.679235 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-01 23:10:40.679246 | orchestrator | Sunday 01 June 2025 23:10:13 +0000 (0:00:14.703) 0:01:22.281 *********** 2025-06-01 23:10:40.679257 | orchestrator | 2025-06-01 23:10:40.679268 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-01 23:10:40.679279 | orchestrator | Sunday 01 June 2025 23:10:13 +0000 (0:00:00.072) 0:01:22.354 *********** 2025-06-01 23:10:40.679291 | orchestrator | 2025-06-01 23:10:40.679301 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-01 23:10:40.679312 | orchestrator | Sunday 01 June 2025 23:10:13 +0000 (0:00:00.067) 0:01:22.421 *********** 2025-06-01 23:10:40.679323 | orchestrator | 2025-06-01 23:10:40.679334 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-06-01 23:10:40.679345 | orchestrator | Sunday 01 June 2025 23:10:13 +0000 (0:00:00.065) 0:01:22.487 *********** 2025-06-01 23:10:40.679356 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:10:40.679367 | orchestrator | changed: 
[testbed-node-1] 2025-06-01 23:10:40.679379 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:10:40.679389 | orchestrator | 2025-06-01 23:10:40.679400 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-06-01 23:10:40.679411 | orchestrator | Sunday 01 June 2025 23:10:27 +0000 (0:00:14.178) 0:01:36.665 *********** 2025-06-01 23:10:40.679422 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:10:40.679433 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:10:40.679444 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:10:40.679455 | orchestrator | 2025-06-01 23:10:40.679471 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:10:40.679484 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-01 23:10:40.679496 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 23:10:40.679507 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 23:10:40.679518 | orchestrator | 2025-06-01 23:10:40.679529 | orchestrator | 2025-06-01 23:10:40.679540 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 23:10:40.679551 | orchestrator | Sunday 01 June 2025 23:10:38 +0000 (0:00:11.296) 0:01:47.963 *********** 2025-06-01 23:10:40.679562 | orchestrator | =============================================================================== 2025-06-01 23:10:40.679583 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 14.70s 2025-06-01 23:10:40.679595 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.18s 2025-06-01 23:10:40.679605 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.30s 2025-06-01 
23:10:40.679616 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 9.10s 2025-06-01 23:10:40.679627 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.49s 2025-06-01 23:10:40.679638 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.06s 2025-06-01 23:10:40.679649 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.93s 2025-06-01 23:10:40.679660 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.87s 2025-06-01 23:10:40.679670 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.75s 2025-06-01 23:10:40.679681 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.74s 2025-06-01 23:10:40.679700 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.56s 2025-06-01 23:10:40.679711 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.40s 2025-06-01 23:10:40.679723 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.35s 2025-06-01 23:10:40.679733 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.63s 2025-06-01 23:10:40.679745 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.33s 2025-06-01 23:10:40.679756 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.26s 2025-06-01 23:10:40.679767 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.13s 2025-06-01 23:10:40.679777 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.09s 2025-06-01 23:10:40.679852 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.02s 2025-06-01 23:10:40.679865 
| orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 1.41s 2025-06-01 23:10:40.679876 | orchestrator | 2025-06-01 23:10:40 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:10:43.716998 | orchestrator | 2025-06-01 23:10:43 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:10:43.717235 | orchestrator | 2025-06-01 23:10:43 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:10:43.718175 | orchestrator | 2025-06-01 23:10:43 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:10:43.719035 | orchestrator | 2025-06-01 23:10:43 | INFO  | Task 05643ad6-c283-4719-8720-a65156fa40b3 is in state STARTED 2025-06-01 23:10:43.719055 | orchestrator | 2025-06-01 23:10:43 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:10:46.747678 | orchestrator | 2025-06-01 23:10:46 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:10:46.748531 | orchestrator | 2025-06-01 23:10:46 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:10:46.749235 | orchestrator | 2025-06-01 23:10:46 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:10:46.751250 | orchestrator | 2025-06-01 23:10:46 | INFO  | Task 05643ad6-c283-4719-8720-a65156fa40b3 is in state STARTED 2025-06-01 23:10:46.751273 | orchestrator | 2025-06-01 23:10:46 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:10:49.793976 | orchestrator | 2025-06-01 23:10:49 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:10:49.794127 | orchestrator | 2025-06-01 23:10:49 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:10:49.794702 | orchestrator | 2025-06-01 23:10:49 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:10:49.799130 | orchestrator | 2025-06-01 23:10:49 | INFO 
 | Task 05643ad6-c283-4719-8720-a65156fa40b3 is in state STARTED 2025-06-01 23:10:49.799151 | orchestrator | 2025-06-01 23:10:49 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:10:52.827327 | orchestrator | 2025-06-01 23:10:52 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:10:52.829124 | orchestrator | 2025-06-01 23:10:52 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:10:52.831198 | orchestrator | 2025-06-01 23:10:52 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:10:52.832921 | orchestrator | 2025-06-01 23:10:52 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED 2025-06-01 23:10:52.834352 | orchestrator | 2025-06-01 23:10:52 | INFO  | Task 05643ad6-c283-4719-8720-a65156fa40b3 is in state SUCCESS 2025-06-01 23:10:52.834419 | orchestrator | 2025-06-01 23:10:52 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:10:55.881985 | orchestrator | 2025-06-01 23:10:55 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:10:55.885335 | orchestrator | 2025-06-01 23:10:55 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:10:55.888098 | orchestrator | 2025-06-01 23:10:55 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:10:55.890421 | orchestrator | 2025-06-01 23:10:55 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED 2025-06-01 23:10:55.890521 | orchestrator | 2025-06-01 23:10:55 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:10:58.944625 | orchestrator | 2025-06-01 23:10:58 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:10:58.946358 | orchestrator | 2025-06-01 23:10:58 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:10:58.948015 | orchestrator | 2025-06-01 23:10:58 | INFO  | Task 
581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:10:58.950680 | orchestrator | 2025-06-01 23:10:58 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED 2025-06-01 23:10:58.950704 | orchestrator | 2025-06-01 23:10:58 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:11:02.009639 | orchestrator | 2025-06-01 23:11:02 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:11:02.012118 | orchestrator | 2025-06-01 23:11:02 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:11:02.014101 | orchestrator | 2025-06-01 23:11:02 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:11:02.017214 | orchestrator | 2025-06-01 23:11:02 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED 2025-06-01 23:11:02.017243 | orchestrator | 2025-06-01 23:11:02 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:11:05.072468 | orchestrator | 2025-06-01 23:11:05 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:11:05.075923 | orchestrator | 2025-06-01 23:11:05 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:11:05.079700 | orchestrator | 2025-06-01 23:11:05 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:11:05.081851 | orchestrator | 2025-06-01 23:11:05 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED 2025-06-01 23:11:05.082194 | orchestrator | 2025-06-01 23:11:05 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:11:08.132329 | orchestrator | 2025-06-01 23:11:08 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:11:08.134154 | orchestrator | 2025-06-01 23:11:08 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:11:08.135521 | orchestrator | 2025-06-01 23:11:08 | INFO  | Task 
581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:11:08.137765 | orchestrator | 2025-06-01 23:11:08 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED 2025-06-01 23:11:08.137800 | orchestrator | 2025-06-01 23:11:08 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:11:11.186915 | orchestrator | 2025-06-01 23:11:11 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state STARTED 2025-06-01 23:11:11.188689 | orchestrator | 2025-06-01 23:11:11 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:11:11.190248 | orchestrator | 2025-06-01 23:11:11 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:11:11.193036 | orchestrator | 2025-06-01 23:11:11 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED 2025-06-01 23:11:11.193061 | orchestrator | 2025-06-01 23:11:11 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:11:14.241582 | orchestrator | 2025-06-01 23:11:14 | INFO  | Task f0436069-7cdb-480c-932d-68dcd5789fb7 is in state SUCCESS 2025-06-01 23:11:14.244153 | orchestrator | 2025-06-01 23:11:14 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:11:14.247378 | orchestrator | 2025-06-01 23:11:14 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:11:14.249210 | orchestrator | 2025-06-01 23:11:14 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED 2025-06-01 23:11:14.249235 | orchestrator | 2025-06-01 23:11:14 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:11:17.301057 | orchestrator | 2025-06-01 23:11:17 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:11:17.301448 | orchestrator | 2025-06-01 23:11:17 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:11:17.303742 | orchestrator | 2025-06-01 23:11:17 | INFO  | Task 
0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED 2025-06-01 23:11:17.303876 | orchestrator | 2025-06-01 23:11:17 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:11:20.351490 | orchestrator | 2025-06-01 23:11:20 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:11:20.352782 | orchestrator | 2025-06-01 23:11:20 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state STARTED 2025-06-01 23:11:20.354793 | orchestrator | 2025-06-01 23:11:20 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED 2025-06-01 23:11:20.355098 | orchestrator | 2025-06-01 23:11:20 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:11:23.409492 | orchestrator | 2025-06-01 23:11:23 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED 2025-06-01 23:11:23.414599 | orchestrator | 2025-06-01 23:11:23 | INFO  | Task 581ff2af-ba2c-4b38-801f-b53638449c80 is in state SUCCESS 2025-06-01 23:11:23.417394 | orchestrator | 2025-06-01 23:11:23.417434 | orchestrator | 2025-06-01 23:11:23.417447 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 23:11:23.417460 | orchestrator | 2025-06-01 23:11:23.417499 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 23:11:23.417511 | orchestrator | Sunday 01 June 2025 23:10:47 +0000 (0:00:00.561) 0:00:00.561 *********** 2025-06-01 23:11:23.417523 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:11:23.417535 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:11:23.417546 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:11:23.417557 | orchestrator | 2025-06-01 23:11:23.417568 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 23:11:23.417580 | orchestrator | Sunday 01 June 2025 23:10:48 +0000 (0:00:00.771) 0:00:01.332 *********** 2025-06-01 23:11:23.417591 | orchestrator | ok: 
[testbed-node-0] => (item=enable_nova_True) 2025-06-01 23:11:23.417603 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-06-01 23:11:23.417614 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-06-01 23:11:23.417625 | orchestrator | 2025-06-01 23:11:23.417636 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-06-01 23:11:23.417647 | orchestrator | 2025-06-01 23:11:23.417658 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-06-01 23:11:23.417669 | orchestrator | Sunday 01 June 2025 23:10:49 +0000 (0:00:01.370) 0:00:02.702 *********** 2025-06-01 23:11:23.417680 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:11:23.417718 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:11:23.417730 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:11:23.417741 | orchestrator | 2025-06-01 23:11:23.417752 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:11:23.417764 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 23:11:23.417778 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 23:11:23.417789 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 23:11:23.417799 | orchestrator | 2025-06-01 23:11:23.417810 | orchestrator | 2025-06-01 23:11:23.417885 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 23:11:23.417898 | orchestrator | Sunday 01 June 2025 23:10:50 +0000 (0:00:00.961) 0:00:03.664 *********** 2025-06-01 23:11:23.417909 | orchestrator | =============================================================================== 2025-06-01 23:11:23.417920 | orchestrator | Group hosts based on enabled services 
----------------------------------- 1.37s 2025-06-01 23:11:23.417931 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.96s 2025-06-01 23:11:23.417941 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.77s 2025-06-01 23:11:23.417952 | orchestrator | 2025-06-01 23:11:23.417963 | orchestrator | 2025-06-01 23:11:23.417974 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-06-01 23:11:23.417986 | orchestrator | 2025-06-01 23:11:23.417999 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-06-01 23:11:23.418010 | orchestrator | Sunday 01 June 2025 23:06:22 +0000 (0:00:00.200) 0:00:00.200 *********** 2025-06-01 23:11:23.418218 | orchestrator | changed: [localhost] 2025-06-01 23:11:23.418233 | orchestrator | 2025-06-01 23:11:23.418245 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-06-01 23:11:23.418258 | orchestrator | Sunday 01 June 2025 23:06:23 +0000 (0:00:01.183) 0:00:01.384 *********** 2025-06-01 23:11:23.418271 | orchestrator | 2025-06-01 23:11:23.418283 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-01 23:11:23.418296 | orchestrator | 2025-06-01 23:11:23.418308 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-01 23:11:23.418321 | orchestrator | 2025-06-01 23:11:23.418334 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-01 23:11:23.418347 | orchestrator | 2025-06-01 23:11:23.418360 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-01 23:11:23.418371 | orchestrator | 2025-06-01 23:11:23.418382 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-01 
23:11:23.418393 | orchestrator | changed: [localhost] 2025-06-01 23:11:23.418403 | orchestrator | 2025-06-01 23:11:23.418414 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-06-01 23:11:23.418439 | orchestrator | Sunday 01 June 2025 23:10:54 +0000 (0:04:31.120) 0:04:32.504 *********** 2025-06-01 23:11:23.418450 | orchestrator | changed: [localhost] 2025-06-01 23:11:23.418460 | orchestrator | 2025-06-01 23:11:23.418472 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 23:11:23.418482 | orchestrator | 2025-06-01 23:11:23.418493 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 23:11:23.418504 | orchestrator | Sunday 01 June 2025 23:11:12 +0000 (0:00:17.338) 0:04:49.843 *********** 2025-06-01 23:11:23.418515 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:11:23.418525 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:11:23.418536 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:11:23.418547 | orchestrator | 2025-06-01 23:11:23.418558 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 23:11:23.418568 | orchestrator | Sunday 01 June 2025 23:11:12 +0000 (0:00:00.460) 0:04:50.304 *********** 2025-06-01 23:11:23.418590 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-06-01 23:11:23.418602 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-06-01 23:11:23.418613 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-06-01 23:11:23.418624 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-06-01 23:11:23.418635 | orchestrator | 2025-06-01 23:11:23.418645 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-06-01 23:11:23.418656 | orchestrator | skipping: no hosts matched 2025-06-01 
23:11:23.418668 | orchestrator | 2025-06-01 23:11:23.418679 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:11:23.418703 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 23:11:23.418716 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 23:11:23.418727 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 23:11:23.418738 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 23:11:23.418749 | orchestrator | 2025-06-01 23:11:23.418760 | orchestrator | 2025-06-01 23:11:23.418771 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 23:11:23.418781 | orchestrator | Sunday 01 June 2025 23:11:12 +0000 (0:00:00.439) 0:04:50.743 *********** 2025-06-01 23:11:23.418792 | orchestrator | =============================================================================== 2025-06-01 23:11:23.418803 | orchestrator | Download ironic-agent initramfs --------------------------------------- 271.12s 2025-06-01 23:11:23.418813 | orchestrator | Download ironic-agent kernel ------------------------------------------- 17.34s 2025-06-01 23:11:23.418849 | orchestrator | Ensure the destination directory exists --------------------------------- 1.18s 2025-06-01 23:11:23.418860 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.46s 2025-06-01 23:11:23.418941 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s 2025-06-01 23:11:23.418953 | orchestrator | 2025-06-01 23:11:23.418964 | orchestrator | 2025-06-01 23:11:23.418974 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 23:11:23.418985 | 
orchestrator | 2025-06-01 23:11:23.418996 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-06-01 23:11:23.419007 | orchestrator | Sunday 01 June 2025 23:01:31 +0000 (0:00:00.297) 0:00:00.297 *********** 2025-06-01 23:11:23.419099 | orchestrator | changed: [testbed-manager] 2025-06-01 23:11:23.419110 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:11:23.419140 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:11:23.419151 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:11:23.419162 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:11:23.419173 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:11:23.419183 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:11:23.419238 | orchestrator | 2025-06-01 23:11:23.419250 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 23:11:23.419261 | orchestrator | Sunday 01 June 2025 23:01:32 +0000 (0:00:01.036) 0:00:01.334 *********** 2025-06-01 23:11:23.419272 | orchestrator | changed: [testbed-manager] 2025-06-01 23:11:23.419283 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:11:23.419294 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:11:23.419305 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:11:23.419315 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:11:23.419326 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:11:23.419337 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:11:23.419348 | orchestrator | 2025-06-01 23:11:23.419359 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 23:11:23.419379 | orchestrator | Sunday 01 June 2025 23:01:33 +0000 (0:00:00.941) 0:00:02.275 *********** 2025-06-01 23:11:23.419390 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-06-01 23:11:23.419401 | orchestrator | changed: [testbed-node-0] => 
(item=enable_nova_True) 2025-06-01 23:11:23.419411 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-06-01 23:11:23.419422 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-06-01 23:11:23.419433 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-06-01 23:11:23.419443 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-06-01 23:11:23.419454 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-06-01 23:11:23.419465 | orchestrator | 2025-06-01 23:11:23.419475 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-06-01 23:11:23.419486 | orchestrator | 2025-06-01 23:11:23.419497 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-06-01 23:11:23.419508 | orchestrator | Sunday 01 June 2025 23:01:34 +0000 (0:00:01.255) 0:00:03.531 *********** 2025-06-01 23:11:23.419525 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:11:23.419536 | orchestrator | 2025-06-01 23:11:23.419547 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-06-01 23:11:23.419558 | orchestrator | Sunday 01 June 2025 23:01:35 +0000 (0:00:00.857) 0:00:04.388 *********** 2025-06-01 23:11:23.419569 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-06-01 23:11:23.419580 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-06-01 23:11:23.419611 | orchestrator | 2025-06-01 23:11:23.419622 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-06-01 23:11:23.419633 | orchestrator | Sunday 01 June 2025 23:01:38 +0000 (0:00:03.435) 0:00:07.824 *********** 2025-06-01 23:11:23.419644 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-01 23:11:23.419655 | orchestrator | changed: [testbed-node-0] => (item=None) 
2025-06-01 23:11:23.419666 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:11:23.419676 | orchestrator | 2025-06-01 23:11:23.419687 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-06-01 23:11:23.419698 | orchestrator | Sunday 01 June 2025 23:01:42 +0000 (0:00:03.608) 0:00:11.432 *********** 2025-06-01 23:11:23.419729 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:11:23.419741 | orchestrator | 2025-06-01 23:11:23.419762 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-06-01 23:11:23.419773 | orchestrator | Sunday 01 June 2025 23:01:42 +0000 (0:00:00.775) 0:00:12.207 *********** 2025-06-01 23:11:23.419785 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:11:23.419816 | orchestrator | 2025-06-01 23:11:23.419865 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-06-01 23:11:23.419876 | orchestrator | Sunday 01 June 2025 23:01:44 +0000 (0:00:01.663) 0:00:13.871 *********** 2025-06-01 23:11:23.419887 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:11:23.419898 | orchestrator | 2025-06-01 23:11:23.419909 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-01 23:11:23.419920 | orchestrator | Sunday 01 June 2025 23:01:47 +0000 (0:00:02.691) 0:00:16.563 *********** 2025-06-01 23:11:23.419930 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:11:23.419941 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.419952 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:11:23.419963 | orchestrator | 2025-06-01 23:11:23.419974 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-01 23:11:23.419984 | orchestrator | Sunday 01 June 2025 23:01:47 +0000 (0:00:00.540) 0:00:17.103 *********** 2025-06-01 23:11:23.419995 | orchestrator | ok: [testbed-node-0] 2025-06-01 
23:11:23.420006 | orchestrator | 2025-06-01 23:11:23.420016 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-06-01 23:11:23.420027 | orchestrator | Sunday 01 June 2025 23:02:16 +0000 (0:00:28.138) 0:00:45.242 *********** 2025-06-01 23:11:23.420046 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:11:23.420057 | orchestrator | 2025-06-01 23:11:23.420067 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-01 23:11:23.420078 | orchestrator | Sunday 01 June 2025 23:02:29 +0000 (0:00:13.135) 0:00:58.377 *********** 2025-06-01 23:11:23.420089 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:11:23.420100 | orchestrator | 2025-06-01 23:11:23.420110 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-01 23:11:23.420121 | orchestrator | Sunday 01 June 2025 23:02:40 +0000 (0:00:10.997) 0:01:09.375 *********** 2025-06-01 23:11:23.420132 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:11:23.420143 | orchestrator | 2025-06-01 23:11:23.420154 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-06-01 23:11:23.420165 | orchestrator | Sunday 01 June 2025 23:02:43 +0000 (0:00:03.179) 0:01:12.554 *********** 2025-06-01 23:11:23.420176 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:11:23.420186 | orchestrator | 2025-06-01 23:11:23.420197 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-01 23:11:23.420207 | orchestrator | Sunday 01 June 2025 23:02:44 +0000 (0:00:01.276) 0:01:13.830 *********** 2025-06-01 23:11:23.420218 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:11:23.420229 | orchestrator | 2025-06-01 23:11:23.420240 | orchestrator | TASK [nova : Running Nova API bootstrap container] 
*****************************
2025-06-01 23:11:23.420250 | orchestrator | Sunday 01 June 2025 23:02:45 +0000 (0:00:01.285) 0:01:15.115 ***********
2025-06-01 23:11:23.420261 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:11:23.420272 | orchestrator |
2025-06-01 23:11:23.420282 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-06-01 23:11:23.420293 | orchestrator | Sunday 01 June 2025 23:03:01 +0000 (0:00:16.058) 0:01:31.174 ***********
2025-06-01 23:11:23.420304 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.420314 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.420325 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.420336 | orchestrator |
2025-06-01 23:11:23.420347 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-06-01 23:11:23.420357 | orchestrator |
2025-06-01 23:11:23.420368 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-06-01 23:11:23.420378 | orchestrator | Sunday 01 June 2025 23:03:02 +0000 (0:00:00.382) 0:01:31.557 ***********
2025-06-01 23:11:23.420389 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:11:23.420400 | orchestrator |
2025-06-01 23:11:23.420411 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-06-01 23:11:23.420421 | orchestrator | Sunday 01 June 2025 23:03:03 +0000 (0:00:00.718) 0:01:32.277 ***********
2025-06-01 23:11:23.420432 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.420443 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.420453 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:11:23.420464 | orchestrator |
2025-06-01 23:11:23.420475 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-06-01 23:11:23.420486 | orchestrator | Sunday 01 June 2025 23:03:05 +0000 (0:00:01.974) 0:01:34.251 ***********
2025-06-01 23:11:23.420496 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.420513 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.420524 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:11:23.420535 | orchestrator |
2025-06-01 23:11:23.420546 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-06-01 23:11:23.420556 | orchestrator | Sunday 01 June 2025 23:03:07 +0000 (0:00:01.986) 0:01:36.238 ***********
2025-06-01 23:11:23.420567 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.420578 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.420588 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.420605 | orchestrator |
2025-06-01 23:11:23.420616 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-06-01 23:11:23.420627 | orchestrator | Sunday 01 June 2025 23:03:07 +0000 (0:00:00.335) 0:01:36.574 ***********
2025-06-01 23:11:23.420638 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-01 23:11:23.420649 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.420659 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-01 23:11:23.420670 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.420681 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-06-01 23:11:23.420692 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-06-01 23:11:23.420702 | orchestrator |
2025-06-01 23:11:23.420713 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-06-01 23:11:23.420724 | orchestrator | Sunday 01 June 2025 23:03:15 +0000 (0:00:08.327) 0:01:44.901 ***********
2025-06-01 23:11:23.420735 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.420745 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.420756 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.420767 | orchestrator |
2025-06-01 23:11:23.420783 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-06-01 23:11:23.420794 | orchestrator | Sunday 01 June 2025 23:03:16 +0000 (0:00:00.799) 0:01:45.701 ***********
2025-06-01 23:11:23.420805 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-06-01 23:11:23.420816 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.420893 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-01 23:11:23.420912 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.420923 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-01 23:11:23.420934 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.420945 | orchestrator |
2025-06-01 23:11:23.420956 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-06-01 23:11:23.420967 | orchestrator | Sunday 01 June 2025 23:03:18 +0000 (0:00:02.339) 0:01:48.040 ***********
2025-06-01 23:11:23.420977 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.420988 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:11:23.420999 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.421009 | orchestrator |
2025-06-01 23:11:23.421020 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-06-01 23:11:23.421031 | orchestrator | Sunday 01 June 2025 23:03:20 +0000 (0:00:01.949) 0:01:49.990 ***********
2025-06-01 23:11:23.421042 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.421052 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.421063 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:11:23.421074 | orchestrator |
2025-06-01 23:11:23.421085 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-06-01 23:11:23.421096 | orchestrator | Sunday 01 June 2025 23:03:22 +0000 (0:00:01.289) 0:01:51.279 ***********
2025-06-01 23:11:23.421106 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.421117 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.421127 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:11:23.421138 | orchestrator |
2025-06-01 23:11:23.421149 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-06-01 23:11:23.421160 | orchestrator | Sunday 01 June 2025 23:03:25 +0000 (0:00:03.366) 0:01:54.645 ***********
2025-06-01 23:11:23.421171 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.421181 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.421192 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:11:23.421203 | orchestrator |
2025-06-01 23:11:23.421214 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-06-01 23:11:23.421225 | orchestrator | Sunday 01 June 2025 23:03:45 +0000 (0:00:19.814) 0:02:14.459 ***********
2025-06-01 23:11:23.421235 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.421246 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.421257 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:11:23.421275 | orchestrator |
2025-06-01 23:11:23.421286 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-06-01 23:11:23.421297 | orchestrator | Sunday 01 June 2025 23:03:58 +0000 (0:00:12.891) 0:02:27.351 ***********
2025-06-01 23:11:23.421308 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.421319 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.421330 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:11:23.421341 | orchestrator |
2025-06-01 23:11:23.421351 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-06-01 23:11:23.421362 | orchestrator | Sunday 01 June 2025 23:04:00 +0000 (0:00:02.747) 0:02:30.099 ***********
2025-06-01 23:11:23.421373 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.421383 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.421394 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:11:23.421405 | orchestrator |
2025-06-01 23:11:23.421415 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-06-01 23:11:23.421426 | orchestrator | Sunday 01 June 2025 23:04:13 +0000 (0:00:12.747) 0:02:42.846 ***********
2025-06-01 23:11:23.421437 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.421448 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.421458 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.421469 | orchestrator |
2025-06-01 23:11:23.421479 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-06-01 23:11:23.421490 | orchestrator | Sunday 01 June 2025 23:04:15 +0000 (0:00:01.602) 0:02:44.449 ***********
2025-06-01 23:11:23.421501 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.421512 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.421523 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.421534 | orchestrator |
2025-06-01 23:11:23.421544 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-06-01 23:11:23.421555 | orchestrator |
2025-06-01 23:11:23.421572 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-01 23:11:23.421583 | orchestrator | Sunday 01 June 2025 23:04:15 +0000 (0:00:00.353) 0:02:44.802 ***********
2025-06-01 23:11:23.421594 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:11:23.421605 | orchestrator |
2025-06-01 23:11:23.421615 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-06-01 23:11:23.421626 | orchestrator | Sunday 01 June 2025 23:04:16 +0000 (0:00:00.521) 0:02:45.324 ***********
2025-06-01 23:11:23.421637 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-06-01 23:11:23.421647 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-06-01 23:11:23.421658 | orchestrator |
2025-06-01 23:11:23.421669 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-06-01 23:11:23.421679 | orchestrator | Sunday 01 June 2025 23:04:19 +0000 (0:00:02.952) 0:02:48.277 ***********
2025-06-01 23:11:23.421690 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-06-01 23:11:23.421702 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-06-01 23:11:23.421713 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-06-01 23:11:23.421730 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-06-01 23:11:23.421741 | orchestrator |
2025-06-01 23:11:23.421752 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-06-01 23:11:23.421763 | orchestrator | Sunday 01 June 2025 23:04:25 +0000 (0:00:06.648) 0:02:54.925 ***********
2025-06-01 23:11:23.421773 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-01 23:11:23.421784 | orchestrator |
2025-06-01 23:11:23.421795 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-06-01 23:11:23.421813 | orchestrator | Sunday 01 June 2025 23:04:28 +0000 (0:00:03.236) 0:02:58.162 ***********
2025-06-01 23:11:23.421851 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-01 23:11:23.421862 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-06-01 23:11:23.421873 | orchestrator |
2025-06-01 23:11:23.421884 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-06-01 23:11:23.421895 | orchestrator | Sunday 01 June 2025 23:04:32 +0000 (0:00:03.800) 0:03:01.963 ***********
2025-06-01 23:11:23.421906 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-01 23:11:23.421917 | orchestrator |
2025-06-01 23:11:23.421927 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-06-01 23:11:23.421938 | orchestrator | Sunday 01 June 2025 23:04:35 +0000 (0:00:03.066) 0:03:05.030 ***********
2025-06-01 23:11:23.421949 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-06-01 23:11:23.421959 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-06-01 23:11:23.421970 | orchestrator |
2025-06-01 23:11:23.421981 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-06-01 23:11:23.421992 | orchestrator | Sunday 01 June 2025 23:04:42 +0000 (0:00:07.091) 0:03:12.121 ***********
2025-06-01 23:11:23.422009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 23:11:23.422069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 23:11:23.422094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 23:11:23.422117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.422130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.422142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.422153 | orchestrator |
2025-06-01 23:11:23.422164 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-06-01 23:11:23.422175 | orchestrator | Sunday 01 June 2025 23:04:44 +0000 (0:00:01.763) 0:03:13.885 ***********
2025-06-01 23:11:23.422186 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.422197 | orchestrator |
2025-06-01 23:11:23.422207 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-06-01 23:11:23.422223 | orchestrator | Sunday 01 June 2025 23:04:44 +0000 (0:00:00.155) 0:03:14.041 ***********
2025-06-01 23:11:23.422234 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.422245 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.422256 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.422266 | orchestrator |
2025-06-01 23:11:23.422277 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-06-01 23:11:23.422288 | orchestrator | Sunday 01 June 2025 23:04:45 +0000 (0:00:00.551) 0:03:14.592 ***********
2025-06-01 23:11:23.422299 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-01 23:11:23.422309 | orchestrator |
2025-06-01 23:11:23.422320 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-06-01 23:11:23.422350 | orchestrator | Sunday 01 June 2025 23:04:46 +0000 (0:00:01.188) 0:03:15.781 ***********
2025-06-01 23:11:23.422361 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.422372 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.422383 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.422393 | orchestrator |
2025-06-01 23:11:23.422404 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-01 23:11:23.422415 | orchestrator | Sunday 01 June 2025 23:04:47 +0000 (0:00:00.585) 0:03:16.367 ***********
2025-06-01 23:11:23.422425 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:11:23.422436 | orchestrator |
2025-06-01 23:11:23.422447 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-06-01 23:11:23.422458 | orchestrator | Sunday 01 June 2025 23:04:48 +0000 (0:00:00.883) 0:03:17.250 ***********
2025-06-01 23:11:23.422477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 23:11:23.422491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 23:11:23.422509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 23:11:23.422555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.422568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.422579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.422590 | orchestrator |
2025-06-01 23:11:23.422602 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-06-01 23:11:23.422613 | orchestrator | Sunday 01 June 2025 23:04:50 +0000 (0:00:02.292) 0:03:19.543 ***********
2025-06-01 23:11:23.422624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 23:11:23.422647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.422659 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.422679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 23:11:23.422692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.422703 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.422715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 23:11:23.422733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.422750 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.422761 | orchestrator |
2025-06-01 23:11:23.422772 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-06-01 23:11:23.422783 | orchestrator | Sunday 01 June 2025 23:04:51 +0000 (0:00:00.784) 0:03:20.327 ***********
2025-06-01 23:11:23.422802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 23:11:23.422814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.422850 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.422862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 23:11:23.422883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.422899 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.422919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 23:11:23.422932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.422943 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.422954 | orchestrator |
2025-06-01 23:11:23.422965 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2025-06-01
23:11:23.422976 | orchestrator | Sunday 01 June 2025 23:04:52 +0000 (0:00:01.515) 0:03:21.843 *********** 2025-06-01 23:11:23.422988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-01 23:11:23.423011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-01 23:11:23.423030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-01 23:11:23.423043 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.423054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.423066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.423083 | orchestrator | 2025-06-01 23:11:23.423094 | orchestrator | TASK [nova : Copying over nova.conf] 
******************************************* 2025-06-01 23:11:23.423105 | orchestrator | Sunday 01 June 2025 23:04:55 +0000 (0:00:02.778) 0:03:24.621 *********** 2025-06-01 23:11:23.423125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-01 23:11:23.423146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-01 23:11:23.423159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}}) 2025-06-01 23:11:23.423176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.423193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.423204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.423216 | orchestrator | 2025-06-01 23:11:23.423227 | orchestrator | TASK 
[nova : Copying over existing policy file] ******************************** 2025-06-01 23:11:23.423244 | orchestrator | Sunday 01 June 2025 23:05:04 +0000 (0:00:09.296) 0:03:33.917 *********** 2025-06-01 23:11:23.423255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-01 23:11:23.423267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:11:23.423284 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:11:23.423301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-01 23:11:23.423313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:11:23.423324 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.423345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-01 23:11:23.423357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:11:23.423375 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:11:23.423386 | orchestrator | 2025-06-01 23:11:23.423397 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-06-01 23:11:23.423408 | orchestrator | Sunday 01 June 2025 23:05:05 +0000 (0:00:00.996) 0:03:34.913 *********** 2025-06-01 23:11:23.423419 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:11:23.423430 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:11:23.423441 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:11:23.423452 | orchestrator | 2025-06-01 23:11:23.423463 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-06-01 23:11:23.423474 | orchestrator | Sunday 01 June 2025 23:05:08 +0000 (0:00:02.673) 0:03:37.587 *********** 2025-06-01 23:11:23.423484 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:11:23.423495 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.423506 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:11:23.423516 | orchestrator | 2025-06-01 23:11:23.423527 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-06-01 23:11:23.423538 | orchestrator | Sunday 01 June 2025 23:05:08 +0000 (0:00:00.360) 0:03:37.947 *********** 2025-06-01 23:11:23.423554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-01 23:11:23.423575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}}}}) 2025-06-01 23:11:23.423589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-01 23:11:23.423606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.423623 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.423635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.423646 | orchestrator | 2025-06-01 23:11:23.423657 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-01 23:11:23.423668 | orchestrator | Sunday 01 June 2025 23:05:10 +0000 (0:00:01.956) 0:03:39.904 *********** 2025-06-01 23:11:23.423678 | orchestrator | 2025-06-01 23:11:23.423689 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-01 23:11:23.423700 | orchestrator | Sunday 01 June 2025 23:05:10 +0000 (0:00:00.284) 0:03:40.189 *********** 2025-06-01 23:11:23.423711 | orchestrator | 2025-06-01 23:11:23.424563 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-01 23:11:23.424592 | orchestrator | Sunday 01 
June 2025 23:05:11 +0000 (0:00:00.302) 0:03:40.491 *********** 2025-06-01 23:11:23.424601 | orchestrator | 2025-06-01 23:11:23.424611 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-06-01 23:11:23.424621 | orchestrator | Sunday 01 June 2025 23:05:11 +0000 (0:00:00.535) 0:03:41.027 *********** 2025-06-01 23:11:23.424631 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:11:23.424813 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:11:23.424848 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:11:23.424858 | orchestrator | 2025-06-01 23:11:23.424867 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-06-01 23:11:23.424877 | orchestrator | Sunday 01 June 2025 23:05:35 +0000 (0:00:24.175) 0:04:05.202 *********** 2025-06-01 23:11:23.424887 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:11:23.424896 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:11:23.424906 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:11:23.424916 | orchestrator | 2025-06-01 23:11:23.424925 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-06-01 23:11:23.424935 | orchestrator | 2025-06-01 23:11:23.424945 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-01 23:11:23.424954 | orchestrator | Sunday 01 June 2025 23:05:47 +0000 (0:00:11.221) 0:04:16.423 *********** 2025-06-01 23:11:23.424965 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:11:23.424975 | orchestrator | 2025-06-01 23:11:23.424985 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-01 23:11:23.424994 | orchestrator | Sunday 01 June 2025 23:05:49 +0000 (0:00:02.001) 0:04:18.424 *********** 
2025-06-01 23:11:23.425004 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:11:23.425014 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:11:23.425024 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:11:23.425033 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.425043 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.425053 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.425063 | orchestrator |
2025-06-01 23:11:23.425073 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2025-06-01 23:11:23.425082 | orchestrator | Sunday 01 June 2025 23:05:50 +0000 (0:00:01.469) 0:04:19.894 ***********
2025-06-01 23:11:23.425092 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.425102 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.425111 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.425121 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:11:23.425131 | orchestrator |
2025-06-01 23:11:23.425141 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-01 23:11:23.425151 | orchestrator | Sunday 01 June 2025 23:05:51 +0000 (0:00:01.071) 0:04:20.966 ***********
2025-06-01 23:11:23.425161 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-06-01 23:11:23.425171 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-06-01 23:11:23.425180 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-06-01 23:11:23.425190 | orchestrator |
2025-06-01 23:11:23.425200 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-01 23:11:23.425209 | orchestrator | Sunday 01 June 2025 23:05:52 +0000 (0:00:00.701) 0:04:21.667 ***********
2025-06-01 23:11:23.425219 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-06-01 23:11:23.425229 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-06-01 23:11:23.425238 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-06-01 23:11:23.425248 | orchestrator |
2025-06-01 23:11:23.425258 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-01 23:11:23.425267 | orchestrator | Sunday 01 June 2025 23:05:53 +0000 (0:00:01.228) 0:04:22.896 ***********
2025-06-01 23:11:23.425277 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2025-06-01 23:11:23.425287 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:11:23.425296 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2025-06-01 23:11:23.425306 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:11:23.425316 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2025-06-01 23:11:23.425333 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:11:23.425350 | orchestrator |
2025-06-01 23:11:23.425360 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-06-01 23:11:23.425369 | orchestrator | Sunday 01 June 2025 23:05:54 +0000 (0:00:00.581) 0:04:23.477 ***********
2025-06-01 23:11:23.425379 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-01 23:11:23.425389 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-01 23:11:23.425398 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.425408 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-01 23:11:23.425418 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-01 23:11:23.425427 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-01 23:11:23.425437 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-01 23:11:23.425447 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-01 23:11:23.425459 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.425470 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-01 23:11:23.425481 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-01 23:11:23.425492 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.425536 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-01 23:11:23.425549 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-01 23:11:23.425561 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-01 23:11:23.425572 | orchestrator |
2025-06-01 23:11:23.425583 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2025-06-01 23:11:23.425594 | orchestrator | Sunday 01 June 2025 23:05:55 +0000 (0:00:01.081) 0:04:24.559 ***********
2025-06-01 23:11:23.425605 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.425616 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.425627 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.425639 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:11:23.425650 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:11:23.425661 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:11:23.425672 | orchestrator |
2025-06-01 23:11:23.425683 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2025-06-01 23:11:23.425694 | orchestrator | Sunday 01 June 2025 23:05:56 +0000 (0:00:01.644) 0:04:26.204 ***********
2025-06-01 23:11:23.425705 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.425716 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.425728 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.425739 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:11:23.425750 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:11:23.425762 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:11:23.425773 | orchestrator |
2025-06-01 23:11:23.425785 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-06-01 23:11:23.425796 | orchestrator | Sunday 01 June 2025 23:05:59 +0000 (0:00:02.031) 0:04:28.235 ***********
2025-06-01 23:11:23.425808 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-01 23:11:23.425842 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-01 23:11:23.425858 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-01 23:11:23.425899 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-01 23:11:23.425912 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-01 23:11:23.425922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-01 23:11:23.425933 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.425951 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-01 23:11:23.425971 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.426008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.426048 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.426059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-01 23:11:23.426077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.426087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-01 23:11:23.426102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.426112 | orchestrator |
2025-06-01 23:11:23.426123 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-06-01 23:11:23.426132 | orchestrator | Sunday 01 June 2025 23:06:03 +0000 (0:00:04.121) 0:04:32.357 ***********
2025-06-01 23:11:23.426143 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:11:23.426153 | orchestrator |
2025-06-01 23:11:23.426162 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-06-01 23:11:23.426172 | orchestrator | Sunday 01 June 2025 23:06:04 +0000 (0:00:01.225) 0:04:33.583 ***********
2025-06-01 23:11:23.426211 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-01 23:11:23.426223 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-01 23:11:23.426240 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-01 23:11:23.426255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-01 23:11:23.426266 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-01 23:11:23.426301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-01 23:11:23.426314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-01 23:11:23.426324 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-01 23:11:23.426340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.426350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-01 23:11:23.426365 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.426375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.426411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.426423 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.426440 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.426451 | orchestrator |
2025-06-01 23:11:23.426461 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-06-01 23:11:23.426470 | orchestrator | Sunday 01 June 2025 23:06:09 +0000 (0:00:04.668) 0:04:38.252 ***********
2025-06-01 23:11:23.426485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-01 23:11:23.426496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-01 23:11:23.426531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.426543 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:11:23.426560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-01 23:11:23.426570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-01 23:11:23.426580 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.426590 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:11:23.426605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-01 23:11:23.426640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-01 23:11:23.426652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.426668 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:11:23.426678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-01 23:11:23.426689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.426698 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.426713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-01 23:11:23.426723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-01 23:11:23.426733 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.426769 | orchestrator | skipping:
[testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-01 23:11:23.426787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:11:23.426797 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.426807 | orchestrator | 2025-06-01 23:11:23.426816 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-01 23:11:23.426842 | orchestrator | Sunday 01 June 2025 23:06:12 +0000 (0:00:03.055) 0:04:41.307 *********** 2025-06-01 23:11:23.426853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-01 23:11:23.426863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-01 23:11:23.426878 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  
2025-06-01 23:11:23.426889 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:11:23.426927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-01 23:11:23.426949 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-01 23:11:23.426959 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-01 23:11:23.426969 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:11:23.426979 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-01 23:11:23.426994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-01 23:11:23.427004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-01 23:11:23.427020 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:11:23.427056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-01 23:11:23.427068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:11:23.427078 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:11:23.427088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-01 23:11:23.427099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:11:23.427109 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:11:23.427118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-01 23:11:23.427133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:11:23.427149 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.427159 | orchestrator | 2025-06-01 23:11:23.427169 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-01 23:11:23.427179 | orchestrator | Sunday 01 June 2025 23:06:14 +0000 (0:00:02.601) 0:04:43.909 *********** 2025-06-01 23:11:23.427189 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:11:23.427198 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.427208 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:11:23.427217 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 23:11:23.427227 | orchestrator | 2025-06-01 23:11:23.427237 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-06-01 
23:11:23.427271 | orchestrator | Sunday 01 June 2025 23:06:17 +0000 (0:00:02.516) 0:04:46.425 *********** 2025-06-01 23:11:23.427282 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-01 23:11:23.427292 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-01 23:11:23.427302 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-01 23:11:23.427311 | orchestrator | 2025-06-01 23:11:23.427321 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-06-01 23:11:23.427331 | orchestrator | Sunday 01 June 2025 23:06:19 +0000 (0:00:02.011) 0:04:48.437 *********** 2025-06-01 23:11:23.427340 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-01 23:11:23.427350 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-01 23:11:23.427360 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-01 23:11:23.427370 | orchestrator | 2025-06-01 23:11:23.427380 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-06-01 23:11:23.427389 | orchestrator | Sunday 01 June 2025 23:06:20 +0000 (0:00:01.435) 0:04:49.873 *********** 2025-06-01 23:11:23.427399 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:11:23.427409 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:11:23.427419 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:11:23.427428 | orchestrator | 2025-06-01 23:11:23.427438 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-06-01 23:11:23.427447 | orchestrator | Sunday 01 June 2025 23:06:21 +0000 (0:00:00.410) 0:04:50.283 *********** 2025-06-01 23:11:23.427457 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:11:23.427467 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:11:23.427477 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:11:23.427486 | orchestrator | 2025-06-01 23:11:23.427496 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 
2025-06-01 23:11:23.427524 | orchestrator | Sunday 01 June 2025 23:06:21 +0000 (0:00:00.521) 0:04:50.805 *********** 2025-06-01 23:11:23.427535 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-01 23:11:23.427544 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-01 23:11:23.427554 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-01 23:11:23.427564 | orchestrator | 2025-06-01 23:11:23.427573 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-06-01 23:11:23.427583 | orchestrator | Sunday 01 June 2025 23:06:23 +0000 (0:00:01.494) 0:04:52.299 *********** 2025-06-01 23:11:23.427593 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-01 23:11:23.427603 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-01 23:11:23.427612 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-01 23:11:23.427622 | orchestrator | 2025-06-01 23:11:23.427632 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-06-01 23:11:23.427641 | orchestrator | Sunday 01 June 2025 23:06:24 +0000 (0:00:01.641) 0:04:53.941 *********** 2025-06-01 23:11:23.427651 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-01 23:11:23.427661 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-01 23:11:23.427670 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-01 23:11:23.427680 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-06-01 23:11:23.427695 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-06-01 23:11:23.427705 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-06-01 23:11:23.427714 | orchestrator | 2025-06-01 23:11:23.427724 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-06-01 
23:11:23.427734 | orchestrator | Sunday 01 June 2025 23:06:30 +0000 (0:00:06.145) 0:05:00.087 *********** 2025-06-01 23:11:23.427743 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:11:23.427753 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:11:23.427763 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:11:23.427772 | orchestrator | 2025-06-01 23:11:23.427782 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-06-01 23:11:23.427791 | orchestrator | Sunday 01 June 2025 23:06:31 +0000 (0:00:00.311) 0:05:00.399 *********** 2025-06-01 23:11:23.427801 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:11:23.427811 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:11:23.427872 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:11:23.427884 | orchestrator | 2025-06-01 23:11:23.427893 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-06-01 23:11:23.427903 | orchestrator | Sunday 01 June 2025 23:06:31 +0000 (0:00:00.263) 0:05:00.662 *********** 2025-06-01 23:11:23.427913 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:11:23.427922 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:11:23.427937 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:11:23.427947 | orchestrator | 2025-06-01 23:11:23.427957 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-06-01 23:11:23.427967 | orchestrator | Sunday 01 June 2025 23:06:33 +0000 (0:00:02.494) 0:05:03.157 *********** 2025-06-01 23:11:23.427977 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-01 23:11:23.427987 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-01 23:11:23.427997 | orchestrator | 
changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-01 23:11:23.428007 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-01 23:11:23.428018 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-01 23:11:23.428027 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-01 23:11:23.428037 | orchestrator | 2025-06-01 23:11:23.428078 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-06-01 23:11:23.428087 | orchestrator | Sunday 01 June 2025 23:06:38 +0000 (0:00:04.820) 0:05:07.977 *********** 2025-06-01 23:11:23.428095 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-01 23:11:23.428104 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-01 23:11:23.428111 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-01 23:11:23.428119 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-01 23:11:23.428127 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:11:23.428135 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-01 23:11:23.428143 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:11:23.428151 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-01 23:11:23.428159 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:11:23.428167 | orchestrator | 2025-06-01 23:11:23.428175 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-06-01 23:11:23.428183 | orchestrator | Sunday 01 June 2025 23:06:43 +0000 (0:00:04.442) 0:05:12.419 *********** 2025-06-01 23:11:23.428191 | 
orchestrator | skipping: [testbed-node-3] 2025-06-01 23:11:23.428247 | orchestrator | 2025-06-01 23:11:23.428256 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-06-01 23:11:23.428264 | orchestrator | Sunday 01 June 2025 23:06:43 +0000 (0:00:00.281) 0:05:12.701 *********** 2025-06-01 23:11:23.428272 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:11:23.428280 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:11:23.428288 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:11:23.428296 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:11:23.428303 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.428311 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:11:23.428319 | orchestrator | 2025-06-01 23:11:23.428327 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-06-01 23:11:23.428335 | orchestrator | Sunday 01 June 2025 23:06:44 +0000 (0:00:01.446) 0:05:14.147 *********** 2025-06-01 23:11:23.428343 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-01 23:11:23.428351 | orchestrator | 2025-06-01 23:11:23.428359 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-06-01 23:11:23.428367 | orchestrator | Sunday 01 June 2025 23:06:45 +0000 (0:00:00.705) 0:05:14.852 *********** 2025-06-01 23:11:23.428374 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:11:23.428382 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:11:23.428390 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:11:23.428398 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:11:23.428406 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.428413 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:11:23.428421 | orchestrator | 2025-06-01 23:11:23.428429 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 
2025-06-01 23:11:23.428437 | orchestrator | Sunday 01 June 2025 23:06:46 +0000 (0:00:00.821) 0:05:15.674 *********** 2025-06-01 23:11:23.428446 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428487 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428495 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428512 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428552 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428561 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-01 
23:11:23.428570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428578 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428590 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428616 | orchestrator | 2025-06-01 23:11:23.428624 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-06-01 23:11:23.428632 | orchestrator | Sunday 01 June 2025 23:06:52 +0000 (0:00:06.207) 0:05:21.882 *********** 2025-06-01 23:11:23.428640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-01 23:11:23.428649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-01 23:11:23.428657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-01 23:11:23.428669 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-01 23:11:23.428681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-01 23:11:23.428697 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-01 23:11:23.428706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428714 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428735 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428753 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 
2025-06-01 23:11:23.428761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428786 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.428794 | orchestrator | 2025-06-01 23:11:23.428802 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-06-01 23:11:23.428810 | orchestrator | Sunday 01 June 2025 23:06:59 +0000 (0:00:06.736) 0:05:28.619 *********** 2025-06-01 23:11:23.428832 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:11:23.428841 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:11:23.428849 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:11:23.428857 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.428865 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:11:23.428878 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:11:23.428886 | orchestrator | 2025-06-01 23:11:23.428898 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-06-01 23:11:23.428906 | orchestrator | Sunday 01 June 2025 23:07:01 +0000 (0:00:02.339) 0:05:30.959 *********** 2025-06-01 23:11:23.428913 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-01 23:11:23.428921 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-01 23:11:23.428929 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-01 23:11:23.428937 | orchestrator | changed: [testbed-node-3] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-01 23:11:23.428945 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-01 23:11:23.428953 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-01 23:11:23.428961 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-01 23:11:23.428969 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:11:23.428976 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-01 23:11:23.428984 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:11:23.428992 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-01 23:11:23.429004 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.429012 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-01 23:11:23.429020 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-01 23:11:23.429028 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-01 23:11:23.429036 | orchestrator | 2025-06-01 23:11:23.429044 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-06-01 23:11:23.429052 | orchestrator | Sunday 01 June 2025 23:07:06 +0000 (0:00:04.443) 0:05:35.402 *********** 2025-06-01 23:11:23.429060 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:11:23.429068 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:11:23.429076 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:11:23.429084 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:11:23.429091 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.429099 | 
orchestrator | skipping: [testbed-node-2] 2025-06-01 23:11:23.429107 | orchestrator | 2025-06-01 23:11:23.429115 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-06-01 23:11:23.429123 | orchestrator | Sunday 01 June 2025 23:07:06 +0000 (0:00:00.725) 0:05:36.128 *********** 2025-06-01 23:11:23.429131 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-01 23:11:23.429139 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-01 23:11:23.429147 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-01 23:11:23.429155 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-01 23:11:23.429163 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-01 23:11:23.429171 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-01 23:11:23.429179 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-01 23:11:23.429187 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-01 23:11:23.429199 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-01 23:11:23.429207 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-01 23:11:23.429215 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.429223 | orchestrator | skipping: 
[testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-01 23:11:23.429231 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:11:23.429239 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-01 23:11:23.429247 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:11:23.429255 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-01 23:11:23.429263 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-01 23:11:23.429270 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-01 23:11:23.429278 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-01 23:11:23.429300 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-01 23:11:23.429308 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-01 23:11:23.429316 | orchestrator | 2025-06-01 23:11:23.429324 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-06-01 23:11:23.429332 | orchestrator | Sunday 01 June 2025 23:07:11 +0000 (0:00:04.838) 0:05:40.966 *********** 2025-06-01 23:11:23.429340 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-01 23:11:23.429348 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-01 23:11:23.429356 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-01 
23:11:23.429364 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-01 23:11:23.429371 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-01 23:11:23.429379 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-01 23:11:23.429387 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-01 23:11:23.429395 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-01 23:11:23.429406 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-01 23:11:23.429414 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-01 23:11:23.429422 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-01 23:11:23.429430 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-01 23:11:23.429438 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.429446 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-01 23:11:23.429454 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-01 23:11:23.429461 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:11:23.429469 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-01 23:11:23.429477 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-01 23:11:23.429491 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:11:23.429499 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-01 23:11:23.429507 | orchestrator | changed: 
[testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-01 23:11:23.429515 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-01 23:11:23.429523 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-01 23:11:23.429530 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-01 23:11:23.429538 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-01 23:11:23.429546 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-01 23:11:23.429554 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-01 23:11:23.429561 | orchestrator | 2025-06-01 23:11:23.429569 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-06-01 23:11:23.429577 | orchestrator | Sunday 01 June 2025 23:07:19 +0000 (0:00:07.811) 0:05:48.777 *********** 2025-06-01 23:11:23.429585 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:11:23.429593 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:11:23.429601 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:11:23.429608 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:11:23.429616 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.429624 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:11:23.429632 | orchestrator | 2025-06-01 23:11:23.429639 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-06-01 23:11:23.429647 | orchestrator | Sunday 01 June 2025 23:07:20 +0000 (0:00:00.602) 0:05:49.380 *********** 2025-06-01 23:11:23.429655 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:11:23.429663 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:11:23.429670 | orchestrator | 
skipping: [testbed-node-5] 2025-06-01 23:11:23.429678 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:11:23.429686 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.429693 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:11:23.429701 | orchestrator | 2025-06-01 23:11:23.429709 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-06-01 23:11:23.429717 | orchestrator | Sunday 01 June 2025 23:07:21 +0000 (0:00:00.852) 0:05:50.233 *********** 2025-06-01 23:11:23.429724 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:11:23.429732 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.429740 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:11:23.429748 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:11:23.429755 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:11:23.429763 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:11:23.429771 | orchestrator | 2025-06-01 23:11:23.429779 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-06-01 23:11:23.429786 | orchestrator | Sunday 01 June 2025 23:07:22 +0000 (0:00:01.888) 0:05:52.121 *********** 2025-06-01 23:11:23.429798 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 
'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-01 23:11:23.429816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-01 23:11:23.429839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-01 23:11:23.429848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-01 23:11:23.429857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-01 23:11:23.429865 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:11:23.429877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-01 23:11:23.429886 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:11:23.429898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-01 23:11:23.429913 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-01 23:11:23.429921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-01 23:11:23.429930 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:11:23.429938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-01 23:11:23.429946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:11:23.429954 | orchestrator | skipping: [testbed-node-0] 
2025-06-01 23:11:23.429966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-01 23:11:23.429982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-01 23:11:23.429991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:11:23.429999 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:11:23.430007 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.430036 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:11:23.430046 | orchestrator | 2025-06-01 23:11:23.430054 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-06-01 23:11:23.430062 | orchestrator | Sunday 01 June 2025 23:07:24 +0000 (0:00:01.578) 0:05:53.699 *********** 2025-06-01 23:11:23.430070 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-01 23:11:23.430078 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-01 23:11:23.430086 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:11:23.430093 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-01 23:11:23.430101 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-01 23:11:23.430109 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:11:23.430117 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-01 23:11:23.430125 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-01 23:11:23.430133 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:11:23.430141 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-01 23:11:23.430149 | orchestrator | skipping: [testbed-node-0] => 
(item=nova-compute-ironic)  2025-06-01 23:11:23.430156 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:11:23.430164 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-01 23:11:23.430172 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-01 23:11:23.430180 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.430187 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-01 23:11:23.430195 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-01 23:11:23.430203 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:11:23.430216 | orchestrator | 2025-06-01 23:11:23.430224 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-06-01 23:11:23.430232 | orchestrator | Sunday 01 June 2025 23:07:25 +0000 (0:00:00.692) 0:05:54.392 *********** 2025-06-01 23:11:23.430246 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-01 23:11:23.430261 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-01 23:11:23.430269 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-01 23:11:23.430278 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-01 23:11:23.430286 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-01 23:11:23.430303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 23:11:23.430311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 23:11:23.430324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-01 23:11:23.430333 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-01 23:11:23.430341 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.430349 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.430366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.430375 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.430387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-01 23:11:23.430395 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 
'timeout': '30'}}}) 2025-06-01 23:11:23.430403 | orchestrator | 2025-06-01 23:11:23.430412 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-01 23:11:23.430420 | orchestrator | Sunday 01 June 2025 23:07:28 +0000 (0:00:03.024) 0:05:57.417 *********** 2025-06-01 23:11:23.430428 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:11:23.430436 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:11:23.430443 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:11:23.430451 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:11:23.430459 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.430467 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:11:23.430475 | orchestrator | 2025-06-01 23:11:23.430483 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-01 23:11:23.430491 | orchestrator | Sunday 01 June 2025 23:07:28 +0000 (0:00:00.614) 0:05:58.032 *********** 2025-06-01 23:11:23.430498 | orchestrator | 2025-06-01 23:11:23.430506 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-01 23:11:23.430514 | orchestrator | Sunday 01 June 2025 23:07:29 +0000 (0:00:00.349) 0:05:58.382 *********** 2025-06-01 23:11:23.430526 | orchestrator | 2025-06-01 23:11:23.430534 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-01 23:11:23.430542 | orchestrator | Sunday 01 June 2025 23:07:29 +0000 (0:00:00.136) 0:05:58.518 *********** 2025-06-01 23:11:23.430550 | orchestrator | 2025-06-01 23:11:23.430558 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-01 23:11:23.430565 | orchestrator | Sunday 01 June 2025 23:07:29 +0000 (0:00:00.163) 0:05:58.682 *********** 2025-06-01 23:11:23.430573 | orchestrator | 2025-06-01 23:11:23.430581 | orchestrator | TASK [nova-cell : Flush handlers] 
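The container definitions echoed in the "Check nova-cell containers" items above all share one shape: a `container_name`/`group`/`image`/`enabled` core plus a `healthcheck` dict with `interval`, `retries`, `start_period`, `test`, and `timeout`. A minimal sketch (plain Python; field names copied from the log output, the validator itself is illustrative, not part of kolla-ansible) that checks one such entry:

```python
# Sketch only: validate the shape of a kolla container definition as it
# appears in the loop items logged above. Field names come from the log;
# this helper is illustrative and not part of kolla-ansible itself.
REQUIRED_HEALTHCHECK_KEYS = {"interval", "retries", "start_period", "test", "timeout"}


def check_container_def(entry: dict) -> bool:
    """Return True if the entry matches the nova-cell container dict shape."""
    value = entry["value"]
    has_basics = {"container_name", "group", "image", "enabled"} <= value.keys()
    hc_ok = REQUIRED_HEALTHCHECK_KEYS <= value.get("healthcheck", {}).keys()
    return has_basics and hc_ok


# One entry copied (abridged) from the log output above.
example = {
    "key": "nova-ssh",
    "value": {
        "container_name": "nova_ssh",
        "group": "compute",
        "image": "registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530",
        "enabled": True,
        "volumes": ["kolla_logs:/var/log/kolla", "nova_compute:/var/lib/nova"],
        "dimensions": {},
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_listen sshd 8022"],
            "timeout": "30",
        },
    },
}

print(check_container_def(example))  # → True
```

The `test` field is a Docker-style `CMD-SHELL` healthcheck, which is why the task feeds these dicts straight into container creation.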
********************************************** 2025-06-01 23:11:23.430589 | orchestrator | Sunday 01 June 2025 23:07:29 +0000 (0:00:00.129) 0:05:58.812 *********** 2025-06-01 23:11:23.430596 | orchestrator | 2025-06-01 23:11:23.430605 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-01 23:11:23.430613 | orchestrator | Sunday 01 June 2025 23:07:29 +0000 (0:00:00.133) 0:05:58.946 *********** 2025-06-01 23:11:23.430620 | orchestrator | 2025-06-01 23:11:23.430628 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-06-01 23:11:23.430636 | orchestrator | Sunday 01 June 2025 23:07:29 +0000 (0:00:00.126) 0:05:59.073 *********** 2025-06-01 23:11:23.430644 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:11:23.430652 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:11:23.430660 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:11:23.430667 | orchestrator | 2025-06-01 23:11:23.430675 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-06-01 23:11:23.430683 | orchestrator | Sunday 01 June 2025 23:07:42 +0000 (0:00:13.005) 0:06:12.079 *********** 2025-06-01 23:11:23.430691 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:11:23.430699 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:11:23.430706 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:11:23.430714 | orchestrator | 2025-06-01 23:11:23.430722 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-06-01 23:11:23.430730 | orchestrator | Sunday 01 June 2025 23:08:02 +0000 (0:00:19.203) 0:06:31.283 *********** 2025-06-01 23:11:23.430742 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:11:23.430750 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:11:23.430757 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:11:23.430765 | orchestrator | 2025-06-01 
23:11:23.430773 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-06-01 23:11:23.430781 | orchestrator | Sunday 01 June 2025 23:08:23 +0000 (0:00:21.164) 0:06:52.447 *********** 2025-06-01 23:11:23.430789 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:11:23.430797 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:11:23.430804 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:11:23.430812 | orchestrator | 2025-06-01 23:11:23.430859 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-06-01 23:11:23.430868 | orchestrator | Sunday 01 June 2025 23:09:09 +0000 (0:00:46.647) 0:07:39.094 *********** 2025-06-01 23:11:23.430876 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:11:23.430883 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:11:23.430891 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:11:23.430899 | orchestrator | 2025-06-01 23:11:23.430907 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-06-01 23:11:23.430915 | orchestrator | Sunday 01 June 2025 23:09:10 +0000 (0:00:01.098) 0:07:40.193 *********** 2025-06-01 23:11:23.430922 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:11:23.430930 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:11:23.430938 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:11:23.430945 | orchestrator | 2025-06-01 23:11:23.430953 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-06-01 23:11:23.430961 | orchestrator | Sunday 01 June 2025 23:09:11 +0000 (0:00:00.903) 0:07:41.096 *********** 2025-06-01 23:11:23.430973 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:11:23.430981 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:11:23.430995 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:11:23.431003 | orchestrator | 2025-06-01 23:11:23.431011 
| orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-06-01 23:11:23.431019 | orchestrator | Sunday 01 June 2025 23:09:41 +0000 (0:00:29.122) 0:08:10.218 *********** 2025-06-01 23:11:23.431027 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:11:23.431034 | orchestrator | 2025-06-01 23:11:23.431042 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-06-01 23:11:23.431050 | orchestrator | Sunday 01 June 2025 23:09:41 +0000 (0:00:00.152) 0:08:10.371 *********** 2025-06-01 23:11:23.431057 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:11:23.431065 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:11:23.431073 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:11:23.431081 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.431089 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:11:23.431097 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-06-01 23:11:23.431105 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (19 retries left). 2025-06-01 23:11:23.431113 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (18 retries left). 
2025-06-01 23:11:23.431121 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-01 23:11:23.431129 | orchestrator | 2025-06-01 23:11:23.431137 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-06-01 23:11:23.431145 | orchestrator | Sunday 01 June 2025 23:10:33 +0000 (0:00:52.642) 0:09:03.014 *********** 2025-06-01 23:11:23.431153 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.431161 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:11:23.431168 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:11:23.431176 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:11:23.431184 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:11:23.431191 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:11:23.431199 | orchestrator | 2025-06-01 23:11:23.431207 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-06-01 23:11:23.431215 | orchestrator | Sunday 01 June 2025 23:10:45 +0000 (0:00:11.572) 0:09:14.586 *********** 2025-06-01 23:11:23.431223 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:11:23.431231 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:11:23.431238 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:11:23.431246 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:11:23.431254 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:11:23.431262 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-06-01 23:11:23.431270 | orchestrator | 2025-06-01 23:11:23.431278 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-01 23:11:23.431285 | orchestrator | Sunday 01 June 2025 23:10:50 +0000 (0:00:05.543) 0:09:20.130 *********** 2025-06-01 23:11:23.431293 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-01 23:11:23.431301 | 
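The three `FAILED - RETRYING ... (N retries left)` lines followed by `ok:` above are Ansible's `until`/`retries` loop: the task polls from one delegate host until nova-compute services appear, then succeeds. A hedged sketch of that pattern (task name taken from the log; the module, command, and delay are assumptions, not the actual kolla-ansible task):

```yaml
# Sketch only: the general retries/until pattern behind the log lines above.
# The real task lives in kolla-ansible's nova-cell role; the command and
# delay shown here are assumptions for illustration.
- name: Waiting for nova-compute services to register themselves
  command: >
    docker exec kolla_toolbox openstack
    compute service list --service nova-compute --format json
  register: nova_compute_services
  until: (nova_compute_services.stdout | from_json) | length > 0
  retries: 20   # matches the counter in the log, which starts at 20
  delay: 10
  run_once: true
  delegate_to: "{{ groups['nova-conductor'][0] }}"
```

Each failed poll decrements the counter (`20`, `19`, `18` in the log) before the delegate, testbed-node-0 here, finally returns a non-empty service list.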
orchestrator | 2025-06-01 23:11:23.431309 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-01 23:11:23.431317 | orchestrator | Sunday 01 June 2025 23:11:02 +0000 (0:00:11.429) 0:09:31.559 *********** 2025-06-01 23:11:23.431325 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-01 23:11:23.431332 | orchestrator | 2025-06-01 23:11:23.431339 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-06-01 23:11:23.431345 | orchestrator | Sunday 01 June 2025 23:11:03 +0000 (0:00:01.330) 0:09:32.890 *********** 2025-06-01 23:11:23.431352 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:11:23.431358 | orchestrator | 2025-06-01 23:11:23.431365 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-06-01 23:11:23.431379 | orchestrator | Sunday 01 June 2025 23:11:04 +0000 (0:00:01.322) 0:09:34.213 *********** 2025-06-01 23:11:23.431385 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-01 23:11:23.431392 | orchestrator | 2025-06-01 23:11:23.431398 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-06-01 23:11:23.431408 | orchestrator | Sunday 01 June 2025 23:11:15 +0000 (0:00:10.420) 0:09:44.633 *********** 2025-06-01 23:11:23.431415 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:11:23.431422 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:11:23.431429 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:11:23.431436 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:11:23.431442 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:11:23.431449 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:11:23.431455 | orchestrator | 2025-06-01 23:11:23.431462 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-06-01 23:11:23.431469 | orchestrator | 2025-06-01 
23:11:23.431475 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-06-01 23:11:23.431482 | orchestrator | Sunday 01 June 2025 23:11:17 +0000 (0:00:01.773) 0:09:46.406 ***********
2025-06-01 23:11:23.431489 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:11:23.431495 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:11:23.431502 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:11:23.431509 | orchestrator |
2025-06-01 23:11:23.431515 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-06-01 23:11:23.431522 | orchestrator |
2025-06-01 23:11:23.431528 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-06-01 23:11:23.431535 | orchestrator | Sunday 01 June 2025 23:11:18 +0000 (0:00:01.153) 0:09:47.560 ***********
2025-06-01 23:11:23.431542 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.431548 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.431555 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.431562 | orchestrator |
2025-06-01 23:11:23.431568 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-06-01 23:11:23.431575 | orchestrator |
2025-06-01 23:11:23.431585 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-06-01 23:11:23.431591 | orchestrator | Sunday 01 June 2025 23:11:18 +0000 (0:00:00.529) 0:09:48.089 ***********
2025-06-01 23:11:23.431598 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-06-01 23:11:23.431605 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-06-01 23:11:23.431611 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-06-01 23:11:23.431618 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-06-01 23:11:23.431625 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-06-01 23:11:23.431632 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-06-01 23:11:23.431639 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:11:23.431645 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-06-01 23:11:23.431652 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-06-01 23:11:23.431659 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-06-01 23:11:23.431665 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-06-01 23:11:23.431672 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-06-01 23:11:23.431679 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-06-01 23:11:23.431685 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:11:23.431692 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-06-01 23:11:23.431699 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-06-01 23:11:23.431705 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-06-01 23:11:23.431712 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-06-01 23:11:23.431719 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-06-01 23:11:23.431730 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-06-01 23:11:23.431737 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:11:23.431743 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-06-01 23:11:23.431750 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-06-01 23:11:23.431757 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-06-01 23:11:23.431764 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-06-01 23:11:23.431770 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-06-01 23:11:23.431777 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-06-01 23:11:23.431783 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.431790 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-06-01 23:11:23.431797 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-06-01 23:11:23.431803 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-06-01 23:11:23.431810 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-06-01 23:11:23.431817 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-06-01 23:11:23.431838 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-06-01 23:11:23.431844 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.431851 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-06-01 23:11:23.431858 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-06-01 23:11:23.431864 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-06-01 23:11:23.431871 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-06-01 23:11:23.431877 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-06-01 23:11:23.431884 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-06-01 23:11:23.431891 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.431897 | orchestrator |
2025-06-01 23:11:23.431904 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-06-01 23:11:23.431910 | orchestrator |
2025-06-01 23:11:23.431917 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-06-01 23:11:23.431923 | orchestrator | Sunday 01 June 2025 23:11:20 +0000 (0:00:01.340) 0:09:49.430 ***********
2025-06-01 23:11:23.431930 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-06-01 23:11:23.431937 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-06-01 23:11:23.431944 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.431950 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-06-01 23:11:23.431957 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-06-01 23:11:23.431964 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.431971 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-06-01 23:11:23.431977 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-06-01 23:11:23.431984 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.431990 | orchestrator |
2025-06-01 23:11:23.431997 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-06-01 23:11:23.432004 | orchestrator |
2025-06-01 23:11:23.432010 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-06-01 23:11:23.432017 | orchestrator | Sunday 01 June 2025 23:11:21 +0000 (0:00:00.795) 0:09:50.225 ***********
2025-06-01 23:11:23.432023 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.432030 | orchestrator |
2025-06-01 23:11:23.432037 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-06-01 23:11:23.432043 | orchestrator |
2025-06-01 23:11:23.432050 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-06-01 23:11:23.432057 | orchestrator | Sunday 01 June 2025 23:11:21 +0000 (0:00:00.699) 0:09:50.924 ***********
2025-06-01 23:11:23.432067 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:11:23.432074 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:11:23.432081 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:11:23.432087 | orchestrator |
2025-06-01 23:11:23.432097 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:11:23.432133 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:11:23.432141 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-06-01 23:11:23.432148 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-06-01 23:11:23.432155 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-06-01 23:11:23.432162 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-06-01 23:11:23.432169 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-06-01 23:11:23.432175 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-06-01 23:11:23.432182 | orchestrator |
2025-06-01 23:11:23.432189 | orchestrator |
2025-06-01 23:11:23.432195 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:11:23.432202 | orchestrator | Sunday 01 June 2025 23:11:22 +0000 (0:00:00.440) 0:09:51.365 ***********
2025-06-01 23:11:23.432209 | orchestrator | ===============================================================================
2025-06-01 23:11:23.432216 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 52.64s
2025-06-01 23:11:23.432222 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 46.65s
2025-06-01 23:11:23.432229 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 29.12s
2025-06-01 23:11:23.432236 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 28.14s
2025-06-01 23:11:23.432242 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 24.18s
2025-06-01 23:11:23.432249 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.16s
2025-06-01 23:11:23.432256 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.81s
2025-06-01 23:11:23.432262 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 19.20s
2025-06-01 23:11:23.432269 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.06s
2025-06-01 23:11:23.432276 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.14s
2025-06-01 23:11:23.432282 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 13.01s
2025-06-01 23:11:23.432289 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.89s
2025-06-01 23:11:23.432296 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.75s
2025-06-01 23:11:23.432302 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 11.57s
2025-06-01 23:11:23.432309 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.43s
2025-06-01 23:11:23.432316 | orchestrator | nova : Restart nova-api container -------------------------------------- 11.22s
2025-06-01 23:11:23.432322 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.00s
2025-06-01 23:11:23.432329 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.42s
2025-06-01 23:11:23.432336 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.30s
2025-06-01 23:11:23.432350 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.33s
2025-06-01 23:11:23.432357 | orchestrator | 2025-06-01 23:11:23 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED
2025-06-01 23:11:23.432364 | orchestrator | 2025-06-01 23:11:23 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:11:26.457713 | orchestrator | 2025-06-01 23:11:26 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:11:26.458615 | orchestrator | 2025-06-01 23:11:26 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED
2025-06-01 23:11:26.458650 | orchestrator | 2025-06-01 23:11:26 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:11:29.501621 | orchestrator | 2025-06-01 23:11:29 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:11:29.501721 | orchestrator | 2025-06-01 23:11:29 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED
2025-06-01 23:11:29.501736 | orchestrator | 2025-06-01 23:11:29 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:11:32.550987 | orchestrator | 2025-06-01 23:11:32 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:11:32.552261 | orchestrator | 2025-06-01 23:11:32 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED
2025-06-01 23:11:32.552298 | orchestrator | 2025-06-01 23:11:32 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:11:35.595076 | orchestrator | 2025-06-01 23:11:35 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:11:35.596287 | orchestrator | 2025-06-01 23:11:35 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED
2025-06-01 23:11:35.596319 | orchestrator | 2025-06-01 23:11:35 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:11:38.640927 | orchestrator | 2025-06-01 23:11:38 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:11:38.641675 | orchestrator | 2025-06-01 23:11:38 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED
2025-06-01 23:11:38.641712 | orchestrator | 2025-06-01 23:11:38 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:11:41.687859 | orchestrator | 2025-06-01 23:11:41 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:11:41.689056 | orchestrator | 2025-06-01 23:11:41 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED
2025-06-01 23:11:41.689092 | orchestrator | 2025-06-01 23:11:41 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:11:44.740432 | orchestrator | 2025-06-01 23:11:44 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:11:44.742503 | orchestrator | 2025-06-01 23:11:44 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED
2025-06-01 23:11:44.742539 | orchestrator | 2025-06-01 23:11:44 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:11:47.796547 | orchestrator | 2025-06-01 23:11:47 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:11:47.798878 | orchestrator | 2025-06-01 23:11:47 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED
2025-06-01 23:11:47.798911 | orchestrator | 2025-06-01 23:11:47 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:11:50.850724 | orchestrator | 2025-06-01 23:11:50 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:11:50.851714 | orchestrator | 2025-06-01 23:11:50 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED
2025-06-01 23:11:50.851901 | orchestrator | 2025-06-01 23:11:50 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:11:53.900363 | orchestrator | 2025-06-01 23:11:53 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:11:53.901878 | orchestrator | 2025-06-01 23:11:53 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED
2025-06-01 23:11:53.901912 | orchestrator | 2025-06-01 23:11:53 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:11:56.948608 | orchestrator | 2025-06-01 23:11:56 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:11:56.950112 | orchestrator | 2025-06-01 23:11:56 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED
2025-06-01 23:11:56.950227 | orchestrator | 2025-06-01 23:11:56 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:11:59.989700 | orchestrator | 2025-06-01 23:11:59 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:11:59.990738 | orchestrator | 2025-06-01 23:11:59 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED
2025-06-01 23:11:59.990776 | orchestrator | 2025-06-01 23:11:59 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:12:03.047199 | orchestrator | 2025-06-01 23:12:03 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:12:03.048873 | orchestrator | 2025-06-01 23:12:03 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED
2025-06-01 23:12:03.048909 | orchestrator | 2025-06-01 23:12:03 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:12:06.098094 | orchestrator | 2025-06-01 23:12:06 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:12:06.099728 | orchestrator | 2025-06-01 23:12:06 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED
2025-06-01 23:12:06.099761 | orchestrator | 2025-06-01 23:12:06 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:12:09.145015 | orchestrator | 2025-06-01 23:12:09 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:12:09.146319 | orchestrator | 2025-06-01 23:12:09 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED
2025-06-01 23:12:09.146353 | orchestrator | 2025-06-01 23:12:09 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:12:12.187208 | orchestrator | 2025-06-01 23:12:12 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:12:12.188418 | orchestrator | 2025-06-01 23:12:12 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED
2025-06-01 23:12:12.188448 | orchestrator | 2025-06-01 23:12:12 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:12:15.239073 | orchestrator | 2025-06-01 23:12:15 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state STARTED
2025-06-01 23:12:15.239498 | orchestrator | 2025-06-01 23:12:15 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED
2025-06-01 23:12:15.241181 | orchestrator | 2025-06-01 23:12:15 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:12:18.296670 | orchestrator | 2025-06-01 23:12:18 | INFO  | Task d9670a66-345b-401a-8d22-75f75a973633 is in state SUCCESS
2025-06-01 23:12:18.298454 | orchestrator |
2025-06-01 23:12:18.298495 | orchestrator |
2025-06-01 23:12:18.298508 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 23:12:18.298519 | orchestrator |
2025-06-01 23:12:18.298529 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 23:12:18.298540 | orchestrator | Sunday 01 June 2025 23:09:58 +0000 (0:00:00.276) 0:00:00.276 ***********
2025-06-01 23:12:18.298576 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:12:18.298587 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:12:18.298597 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:12:18.298607 | orchestrator |
2025-06-01 23:12:18.298616 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 23:12:18.298626 | orchestrator | Sunday 01 June 2025 23:09:58 +0000 (0:00:00.307) 0:00:00.583 ***********
2025-06-01 23:12:18.299243 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-06-01 23:12:18.299259 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-06-01 23:12:18.299269 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-06-01 23:12:18.299279 | orchestrator |
2025-06-01 23:12:18.299289 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-06-01 23:12:18.299299 | orchestrator |
2025-06-01 23:12:18.299309 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-06-01 23:12:18.299319 | orchestrator | Sunday 01 June 2025 23:09:58 +0000 (0:00:00.480) 0:00:01.064 ***********
2025-06-01 23:12:18.299329 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:12:18.299339 | orchestrator |
2025-06-01 23:12:18.299349 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-06-01 23:12:18.299359 | orchestrator | Sunday 01 June 2025 23:09:59 +0000 (0:00:00.630) 0:00:01.694 ***********
2025-06-01 23:12:18.299372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 23:12:18.299401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 23:12:18.299412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 23:12:18.299422 | orchestrator |
2025-06-01 23:12:18.299432 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-06-01 23:12:18.299442 | orchestrator | Sunday 01 June 2025 23:10:00 +0000 (0:00:00.835) 0:00:02.530 ***********
2025-06-01 23:12:18.299452 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2025-06-01 23:12:18.299463 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2025-06-01 23:12:18.299485 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-01 23:12:18.299495 | orchestrator |
2025-06-01 23:12:18.299505 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-06-01 23:12:18.299514 | orchestrator | Sunday 01 June 2025 23:10:01 +0000 (0:00:00.844) 0:00:03.374 ***********
2025-06-01 23:12:18.299544 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:12:18.299554 | orchestrator |
2025-06-01 23:12:18.299563 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-06-01 23:12:18.299573 | orchestrator | Sunday 01 June 2025 23:10:01 +0000 (0:00:00.749) 0:00:04.124 ***********
2025-06-01 23:12:18.299626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 23:12:18.299639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 23:12:18.299649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 23:12:18.299659 | orchestrator |
2025-06-01 23:12:18.299675 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-06-01 23:12:18.299685 | orchestrator | Sunday 01 June 2025 23:10:03 +0000 (0:00:01.276) 0:00:05.401 ***********
2025-06-01 23:12:18.299695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 23:12:18.299706 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:12:18.299716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 23:12:18.299733 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:12:18.299773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 23:12:18.299785 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:12:18.299795 | orchestrator |
2025-06-01 23:12:18.299805 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-06-01 23:12:18.299814 | orchestrator | Sunday 01 June 2025 23:10:03 +0000 (0:00:00.378) 0:00:05.779 ***********
2025-06-01 23:12:18.299826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 23:12:18.299858 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:12:18.299870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 23:12:18.299882 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:12:18.299898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 23:12:18.299910 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:12:18.299921 | orchestrator |
2025-06-01 23:12:18.299932 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-06-01 23:12:18.299950 | orchestrator | Sunday 01 June 2025 23:10:04 +0000 (0:00:00.873) 0:00:06.653 ***********
2025-06-01 23:12:18.299962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 23:12:18.300004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 23:12:18.300018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 23:12:18.300030 | orchestrator |
2025-06-01 23:12:18.300041 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-06-01 23:12:18.300052 | orchestrator | Sunday 01 June 2025 23:10:05 +0000 (0:00:01.228) 0:00:07.881 ***********
2025-06-01 23:12:18.300064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 23:12:18.300081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 23:12:18.300093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-01 23:12:18.300111 | orchestrator |
2025-06-01 23:12:18.300123 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-06-01 23:12:18.300134 | orchestrator | Sunday 01 June 2025 23:10:07 +0000 (0:00:01.343) 0:00:09.225 ***********
2025-06-01 23:12:18.300145 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:12:18.300157 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:12:18.300168 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:12:18.300179 | orchestrator |
2025-06-01 23:12:18.300188 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-06-01 23:12:18.300198 | orchestrator | Sunday 01 June 2025 23:10:07 +0000 (0:00:00.614) 0:00:09.839 ***********
2025-06-01 23:12:18.300208 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-01 23:12:18.300217 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-01 23:12:18.300227 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-01 23:12:18.300237 | orchestrator |
2025-06-01 23:12:18.300247 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-06-01 23:12:18.300256 | orchestrator | Sunday 01 June 2025 23:10:08 +0000 (0:00:01.250) 0:00:11.090 ***********
2025-06-01 23:12:18.300266 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-01 23:12:18.300303 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-01 23:12:18.300315 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-01 23:12:18.300324 | orchestrator |
2025-06-01 23:12:18.300334 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-06-01 23:12:18.300344 | orchestrator | Sunday 01 June 2025 23:10:10 +0000 (0:00:01.264) 0:00:12.354 ***********
2025-06-01 23:12:18.300354 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-01 23:12:18.300363 | orchestrator |
2025-06-01 23:12:18.300373 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-06-01 23:12:18.300383 | orchestrator | Sunday 01 June 2025 23:10:10 +0000 (0:00:00.783) 0:00:13.138 ***********
2025-06-01 23:12:18.300393 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-06-01 23:12:18.300402 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-06-01 23:12:18.300412 | orchestrator
| ok: [testbed-node-0] 2025-06-01 23:12:18.300421 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:12:18.300431 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:12:18.300441 | orchestrator | 2025-06-01 23:12:18.300450 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-06-01 23:12:18.300460 | orchestrator | Sunday 01 June 2025 23:10:11 +0000 (0:00:00.688) 0:00:13.826 *********** 2025-06-01 23:12:18.300470 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:12:18.300480 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:12:18.300489 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:12:18.300499 | orchestrator | 2025-06-01 23:12:18.300509 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-06-01 23:12:18.300519 | orchestrator | Sunday 01 June 2025 23:10:12 +0000 (0:00:00.614) 0:00:14.441 *********** 2025-06-01 23:12:18.300529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1093724, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3910275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1093724, 
'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3910275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1093724, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3910275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1093701, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3840275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 19695, 'inode': 1093701, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3840275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1093701, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3840275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1093690, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3820274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1093690, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3820274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1093690, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3820274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1093715, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3870275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1093715, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3870275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1093715, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3870275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1093669, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3780274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1093669, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3780274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1093669, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3780274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1093693, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3820274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1093693, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3820274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1093693, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3820274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1093714, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3850274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1093714, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3850274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1093714, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3850274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1093663, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3770275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1093663, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3770275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1093663, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3770275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1093639, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3700273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': 
'/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1093639, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3700273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1093639, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3700273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.300999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1093676, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3790274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1093676, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3790274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1093676, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3790274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1093649, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3730273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301075 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1093649, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3730273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1093649, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3730273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1093708, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3850274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301111 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1093708, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3850274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1093708, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3850274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1093679, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3800275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-06-01 23:12:18.301154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1093679, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3800275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1093679, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3800275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1093721, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3890276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1093721, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3890276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1093721, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3890276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1093660, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3750274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1093660, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3750274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1093660, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3750274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1093697, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3830276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1093697, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3830276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1093697, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3830276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1093641, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3710275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1093641, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3710275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1093641, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3710275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1093655, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3750274, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1093655, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3750274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1093655, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3750274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1093684, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 
1748724126.0, 'ctime': 1748816543.3810275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1093684, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3810275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1093684, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3810275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 
'inode': 1093786, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4120276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1093786, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4120276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1093786, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4120276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1093768, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4010277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1093768, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4010277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1093768, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4010277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1093732, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3920276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1093732, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3920276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1093732, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3920276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 
23:12:18.301524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1093809, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5220287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1093809, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5220287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1093809, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5220287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1093734, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3920276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1093734, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3920276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1093734, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3920276, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1093804, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4170277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1093804, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4170277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 22317, 'inode': 1093804, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4170277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094133, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5240285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094133, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5240285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094133, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5240285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1093795, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4140277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1093795, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4140277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1093795, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4140277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1093802, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4160278, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1093802, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4160278, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301744 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1093802, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4160278, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1093737, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3930275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1093737, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3930275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-06-01 23:12:18.301785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1093737, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3930275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1093772, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4020276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1093772, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4020276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1093772, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4020276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094134, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5250287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094134, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5250287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094134, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5250287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1093808, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4180276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1093808, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4180276, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1093808, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4180276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1093743, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3960276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1093743, 'dev': 135, 'nlink': 1, 'atime': 
1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3960276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.301992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1093743, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3960276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.302002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1093740, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3940275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.302010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 30898, 'inode': 1093740, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3940275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.302053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1093740, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3940275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.302068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1093754, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3970275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.302083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1093754, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3970275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.302091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1093754, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.3970275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.302105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1093759, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4000275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.302113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1093759, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4000275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.302121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1093759, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4000275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.302136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1093776, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4030275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.302150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1093776, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4030275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.302158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1093776, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4030275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.302171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1093799, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4150276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.302179 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1093799, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4150276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.302187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1093783, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4050276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.302199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1093799, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4150276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-06-01 23:12:18.302213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1093783, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4050276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.302221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094136, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5260286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.302233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1093783, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.4050276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.302241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094136, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5260286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.302250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094136, 'dev': 135, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748816543.5260286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-01 23:12:18.302258 | orchestrator | 2025-06-01 23:12:18.302266 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-06-01 23:12:18.302274 | orchestrator | Sunday 01 June 2025 23:10:50 +0000 (0:00:38.528) 0:00:52.970 *********** 2025-06-01 23:12:18.302287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-01 23:12:18.302301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-01 23:12:18.302309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-01 23:12:18.302317 | orchestrator | 2025-06-01 23:12:18.302325 | orchestrator | TASK 
[grafana : Creating grafana database] ************************************* 2025-06-01 23:12:18.302333 | orchestrator | Sunday 01 June 2025 23:10:51 +0000 (0:00:00.992) 0:00:53.963 *********** 2025-06-01 23:12:18.302341 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:12:18.302349 | orchestrator | 2025-06-01 23:12:18.302357 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-06-01 23:12:18.302369 | orchestrator | Sunday 01 June 2025 23:10:53 +0000 (0:00:02.155) 0:00:56.118 *********** 2025-06-01 23:12:18.302377 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:12:18.302385 | orchestrator | 2025-06-01 23:12:18.302393 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-01 23:12:18.302401 | orchestrator | Sunday 01 June 2025 23:10:55 +0000 (0:00:02.087) 0:00:58.205 *********** 2025-06-01 23:12:18.302408 | orchestrator | 2025-06-01 23:12:18.302416 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-01 23:12:18.302424 | orchestrator | Sunday 01 June 2025 23:10:56 +0000 (0:00:00.207) 0:00:58.413 *********** 2025-06-01 23:12:18.302432 | orchestrator | 2025-06-01 23:12:18.302440 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-01 23:12:18.302448 | orchestrator | Sunday 01 June 2025 23:10:56 +0000 (0:00:00.057) 0:00:58.470 *********** 2025-06-01 23:12:18.302456 | orchestrator | 2025-06-01 23:12:18.302463 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-06-01 23:12:18.302471 | orchestrator | Sunday 01 June 2025 23:10:56 +0000 (0:00:00.092) 0:00:58.562 *********** 2025-06-01 23:12:18.302479 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:12:18.302487 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:12:18.302495 | orchestrator | changed: [testbed-node-0] 2025-06-01 
23:12:18.302503 | orchestrator | 2025-06-01 23:12:18.302511 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-06-01 23:12:18.302519 | orchestrator | Sunday 01 June 2025 23:10:58 +0000 (0:00:01.732) 0:01:00.295 *********** 2025-06-01 23:12:18.302531 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:12:18.302539 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:12:18.302547 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-06-01 23:12:18.302555 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-06-01 23:12:18.302563 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-06-01 23:12:18.302571 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:12:18.302579 | orchestrator | 2025-06-01 23:12:18.302587 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-06-01 23:12:18.302594 | orchestrator | Sunday 01 June 2025 23:11:36 +0000 (0:00:38.196) 0:01:38.492 *********** 2025-06-01 23:12:18.302602 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:12:18.302610 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:12:18.302618 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:12:18.302626 | orchestrator | 2025-06-01 23:12:18.302634 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-06-01 23:12:18.302642 | orchestrator | Sunday 01 June 2025 23:12:09 +0000 (0:00:33.682) 0:02:12.174 *********** 2025-06-01 23:12:18.302649 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:12:18.302657 | orchestrator | 2025-06-01 23:12:18.302665 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-06-01 23:12:18.302673 | orchestrator | Sunday 01 June 2025 
23:12:12 +0000 (0:00:02.426) 0:02:14.600 *********** 2025-06-01 23:12:18.302685 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:12:18.302693 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:12:18.302701 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:12:18.302709 | orchestrator | 2025-06-01 23:12:18.302717 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-06-01 23:12:18.302724 | orchestrator | Sunday 01 June 2025 23:12:12 +0000 (0:00:00.329) 0:02:14.930 *********** 2025-06-01 23:12:18.302734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-06-01 23:12:18.302742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-06-01 23:12:18.302752 | orchestrator | 2025-06-01 23:12:18.302760 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-06-01 23:12:18.302768 | orchestrator | Sunday 01 June 2025 23:12:15 +0000 (0:00:02.342) 0:02:17.272 *********** 2025-06-01 23:12:18.302775 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:12:18.302783 | orchestrator | 2025-06-01 23:12:18.302791 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:12:18.302799 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-01 23:12:18.302808 | orchestrator | testbed-node-1 : ok=14 
 changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-01 23:12:18.302815 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-01 23:12:18.302823 | orchestrator | 2025-06-01 23:12:18.302846 | orchestrator | 2025-06-01 23:12:18.302854 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 23:12:18.302862 | orchestrator | Sunday 01 June 2025 23:12:15 +0000 (0:00:00.279) 0:02:17.552 *********** 2025-06-01 23:12:18.302875 | orchestrator | =============================================================================== 2025-06-01 23:12:18.302887 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.53s 2025-06-01 23:12:18.302895 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.20s 2025-06-01 23:12:18.302903 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 33.68s 2025-06-01 23:12:18.302911 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.43s 2025-06-01 23:12:18.302919 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.34s 2025-06-01 23:12:18.302926 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.16s 2025-06-01 23:12:18.302934 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.09s 2025-06-01 23:12:18.302942 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.73s 2025-06-01 23:12:18.302950 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.34s 2025-06-01 23:12:18.302958 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.28s 2025-06-01 23:12:18.302966 | orchestrator | grafana : Configuring dashboards provisioning 
--------------------------- 1.26s
2025-06-01 23:12:18.302973 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.25s
2025-06-01 23:12:18.302981 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.23s
2025-06-01 23:12:18.302989 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.99s
2025-06-01 23:12:18.302997 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.87s
2025-06-01 23:12:18.303005 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.84s
2025-06-01 23:12:18.303013 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.84s
2025-06-01 23:12:18.303020 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.78s
2025-06-01 23:12:18.303028 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.75s
2025-06-01 23:12:18.303036 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.69s
2025-06-01 23:12:18.303044 | orchestrator | 2025-06-01 23:12:18 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state STARTED
2025-06-01 23:12:18.303052 | orchestrator | 2025-06-01 23:12:18 | INFO  | Wait 1 second(s) until the next check
[... identical "is in state STARTED" / "Wait 1 second(s) until the next check" record pairs for task 0b415083-40cd-43c4-b9d4-faaa7096838e repeat every ~3 s from 23:12:18 to 23:13:40; repeats omitted ...]
2025-06-01 23:13:43.690970 | orchestrator | 2025-06-01 23:13:43 | INFO  | Task 2e79fb66-c163-4509-9d2d-4bc2c5e47417 is in state STARTED
[... both tasks polled together every ~3 s from 23:13:43 to 23:14:02; repeats omitted ...]
2025-06-01 23:14:05.095464 | orchestrator | 2025-06-01 23:14:05 | INFO  | Task 2e79fb66-c163-4509-9d2d-4bc2c5e47417 is in state SUCCESS
[... task 0b415083-40cd-43c4-b9d4-faaa7096838e polled every ~3 s from 23:14:05 to 23:15:30, still STARTED; repeats omitted ...]
2025-06-01 23:15:33.491512 | orchestrator | 2025-06-01 23:15:33 | INFO  | Task 0b415083-40cd-43c4-b9d4-faaa7096838e is in state SUCCESS
2025-06-01 23:15:33.493586 | orchestrator |
2025-06-01 23:15:33.493711 | orchestrator | None
2025-06-01 23:15:33.493730 | orchestrator |
2025-06-01 23:15:33.493743 | orchestrator |
2025-06-01 23:15:33.493755 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 23:15:33.493766 | orchestrator |
2025-06-01 23:15:33.493778 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 23:15:33.493789 | orchestrator | Sunday 01 June 2025 23:10:53 +0000 (0:00:00.201) 0:00:00.201 ***********
2025-06-01 23:15:33.493801 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:15:33.493812 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:15:33.493823 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:15:33.493834 | orchestrator |
2025-06-01 23:15:33.493845 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 23:15:33.493856 | orchestrator | Sunday 01 June 2025 23:10:54 +0000 (0:00:00.217) 0:00:00.418 ***********
2025-06-01 23:15:33.493867 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2025-06-01 23:15:33.493937 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2025-06-01 23:15:33.493956 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-06-01 23:15:33.493968 | orchestrator |
2025-06-01 23:15:33.493979 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-06-01 23:15:33.493990 | orchestrator |
2025-06-01 23:15:33.494001 | orchestrator | TASK [octavia : include_tasks]
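The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" records above follow a plain wait-loop pattern. A minimal sketch of that pattern, not the OSISM CLI itself — `get_task_state` is a hypothetical callable standing in for whatever queries the task backend:

```python
import time

def wait_for_task(task_id, get_task_state, interval=1.0, timeout=600.0):
    """Poll a task until it leaves a running state, sleeping between checks.

    get_task_state is assumed to return a state string such as
    "STARTED" or "SUCCESS" for the given task id.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_task_state(task_id)
        print(f"INFO  | Task {task_id} is in state {state}")
        if state not in ("PENDING", "STARTED"):
            return state  # terminal state, e.g. SUCCESS or FAILURE
        print(f"INFO  | Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} still running after {timeout}s")
```

The log shows two tasks being polled in the same loop iteration once the second one is submitted; the sketch handles a single task for clarity.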
2025-06-01 23:15:33.494001 | orchestrator | Sunday 01 June 2025 23:10:54 +0000 (0:00:00.346) 0:00:00.764 ***********
2025-06-01 23:15:33.494012 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:15:33.494616 | orchestrator |
2025-06-01 23:15:33.494629 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-06-01 23:15:33.494641 | orchestrator | Sunday 01 June 2025 23:10:55 +0000 (0:00:00.483) 0:00:01.247 ***********
2025-06-01 23:15:33.494653 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-06-01 23:15:33.494664 | orchestrator |
2025-06-01 23:15:33.494675 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-06-01 23:15:33.494686 | orchestrator | Sunday 01 June 2025 23:10:58 +0000 (0:00:03.346) 0:00:04.594 ***********
2025-06-01 23:15:33.494697 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-06-01 23:15:33.494708 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-06-01 23:15:33.494719 | orchestrator |
2025-06-01 23:15:33.494735 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-06-01 23:15:33.494747 | orchestrator | Sunday 01 June 2025 23:11:04 +0000 (0:00:06.236) 0:00:10.831 ***********
2025-06-01 23:15:33.494758 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-01 23:15:33.494769 | orchestrator |
2025-06-01 23:15:33.494781 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-06-01 23:15:33.494808 | orchestrator | Sunday 01 June 2025 23:11:07 +0000 (0:00:03.193) 0:00:14.024 ***********
2025-06-01 23:15:33.494820 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-01 23:15:33.494831 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-06-01 23:15:33.494864 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-06-01 23:15:33.494906 | orchestrator |
2025-06-01 23:15:33.494918 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-06-01 23:15:33.494928 | orchestrator | Sunday 01 June 2025 23:11:16 +0000 (0:00:08.410) 0:00:22.435 ***********
2025-06-01 23:15:33.494939 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-01 23:15:33.494950 | orchestrator |
2025-06-01 23:15:33.494961 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-06-01 23:15:33.494972 | orchestrator | Sunday 01 June 2025 23:11:19 +0000 (0:00:03.333) 0:00:25.768 ***********
2025-06-01 23:15:33.494983 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-06-01 23:15:33.494993 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-06-01 23:15:33.495004 | orchestrator |
2025-06-01 23:15:33.495014 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-06-01 23:15:33.495025 | orchestrator | Sunday 01 June 2025 23:11:26 +0000 (0:00:07.201) 0:00:32.969 ***********
2025-06-01 23:15:33.495035 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-06-01 23:15:33.495046 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-06-01 23:15:33.495057 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-06-01 23:15:33.495067 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-06-01 23:15:33.495078 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-06-01 23:15:33.495088 | orchestrator |
2025-06-01 23:15:33.495099 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-06-01 23:15:33.495109 | orchestrator | Sunday 01 June 2025 23:11:41 +0000 (0:00:15.093) 0:00:48.063 ***********
2025-06-01 23:15:33.495120 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:15:33.495130 | orchestrator |
2025-06-01 23:15:33.495141 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-06-01 23:15:33.495152 | orchestrator | Sunday 01 June 2025 23:11:42 +0000 (0:00:00.576) 0:00:48.640 ***********
2025-06-01 23:15:33.495163 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:15:33.495173 | orchestrator |
2025-06-01 23:15:33.495184 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2025-06-01 23:15:33.495195 | orchestrator | Sunday 01 June 2025 23:11:47 +0000 (0:00:04.799) 0:00:53.439 ***********
2025-06-01 23:15:33.495206 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:15:33.495216 | orchestrator |
2025-06-01 23:15:33.495227 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-06-01 23:15:33.495252 | orchestrator | Sunday 01 June 2025 23:11:51 +0000 (0:00:04.060) 0:00:57.500 ***********
2025-06-01 23:15:33.495264 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:15:33.495275 | orchestrator |
2025-06-01 23:15:33.495286 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2025-06-01 23:15:33.495297 | orchestrator | Sunday 01 June 2025 23:11:54 +0000 (0:00:03.146) 0:01:00.646 ***********
2025-06-01 23:15:33.495308 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-06-01 23:15:33.495319 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-06-01 23:15:33.495329 | orchestrator |
2025-06-01 23:15:33.495340 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2025-06-01 23:15:33.495351 | orchestrator | Sunday 01 June 2025 23:12:03 +0000 (0:00:09.301) 0:01:09.948 ***********
2025-06-01 23:15:33.495362 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2025-06-01 23:15:33.495373 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2025-06-01 23:15:33.495386 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2025-06-01 23:15:33.495408 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2025-06-01 23:15:33.495419 | orchestrator |
2025-06-01 23:15:33.495430 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2025-06-01 23:15:33.495441 | orchestrator | Sunday 01 June 2025 23:12:19 +0000 (0:00:15.637) 0:01:25.586 ***********
2025-06-01 23:15:33.495452 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:15:33.495462 | orchestrator |
2025-06-01 23:15:33.495473 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2025-06-01 23:15:33.495484 | orchestrator | Sunday 01 June 2025 23:12:23 +0000 (0:00:04.374) 0:01:29.961 ***********
2025-06-01 23:15:33.495495 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:15:33.495506 | orchestrator |
2025-06-01 23:15:33.495516 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2025-06-01 23:15:33.495527 | orchestrator | Sunday 01 June 2025 23:12:28 +0000 (0:00:05.108) 0:01:35.069 ***********
2025-06-01 23:15:33.495538 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:15:33.495549 | orchestrator |
2025-06-01 23:15:33.495559 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2025-06-01 23:15:33.495570 | orchestrator | Sunday 01 June 2025 23:12:29 +0000 (0:00:00.231) 0:01:35.300 ***********
2025-06-01 23:15:33.495581 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:15:33.495592 | orchestrator |
2025-06-01 23:15:33.495603 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-06-01 23:15:33.495620 | orchestrator | Sunday 01 June 2025 23:12:34 +0000 (0:00:05.216) 0:01:40.517 ***********
2025-06-01 23:15:33.495631 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:15:33.495642 | orchestrator |
2025-06-01 23:15:33.495653 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2025-06-01 23:15:33.495664 | orchestrator | Sunday 01 June 2025 23:12:35 +0000 (0:00:01.387) 0:01:41.904 ***********
2025-06-01 23:15:33.495674 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:15:33.495685 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:15:33.495696 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:15:33.495707 | orchestrator |
2025-06-01 23:15:33.495718 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2025-06-01 23:15:33.495729 | orchestrator | Sunday 01 June 2025 23:12:40 +0000 (0:00:05.222) 0:01:47.127 ***********
2025-06-01 23:15:33.495739 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:15:33.495750 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:15:33.495761 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:15:33.495772 | orchestrator |
2025-06-01 23:15:33.495782 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2025-06-01 23:15:33.495793 | orchestrator | Sunday 01 June 2025 23:12:45 +0000 (0:00:04.351) 0:01:51.478 ***********
2025-06-01 23:15:33.495804 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:15:33.495815 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:15:33.495826 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:15:33.495836 | orchestrator |
2025-06-01 23:15:33.495847 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2025-06-01 23:15:33.495858 | orchestrator | Sunday 01 June 2025 23:12:46 +0000 (0:00:00.782) 0:01:52.261 ***********
2025-06-01 23:15:33.495903 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:15:33.495918 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:15:33.495929 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:15:33.495940 | orchestrator |
2025-06-01 23:15:33.495950 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2025-06-01 23:15:33.495961 | orchestrator | Sunday 01 June 2025 23:12:48 +0000 (0:00:02.007) 0:01:54.269 ***********
2025-06-01 23:15:33.495972 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:15:33.495983 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:15:33.496001 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:15:33.496012 | orchestrator |
2025-06-01 23:15:33.496022 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2025-06-01 23:15:33.496033 | orchestrator | Sunday 01 June 2025 23:12:49 +0000 (0:00:01.314) 0:01:55.583 ***********
2025-06-01 23:15:33.496044 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:15:33.496054 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:15:33.496065 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:15:33.496076 | orchestrator |
2025-06-01 23:15:33.496086 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2025-06-01 23:15:33.496097 | orchestrator | Sunday 01 June 2025 23:12:50 +0000 (0:00:01.192) 0:01:56.776 ***********
2025-06-01 23:15:33.496108 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:15:33.496119 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:15:33.496129 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:15:33.496140 | orchestrator |
2025-06-01 23:15:33.496161 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2025-06-01 23:15:33.496172 | orchestrator | Sunday 01 June 2025 23:12:52 +0000 (0:00:01.927) 0:01:58.703 ***********
2025-06-01 23:15:33.496183 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:15:33.496194 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:15:33.496205 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:15:33.496216 | orchestrator |
2025-06-01 23:15:33.496227 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2025-06-01 23:15:33.496237 | orchestrator | Sunday 01 June 2025 23:12:54 +0000 (0:00:00.621) 0:02:00.501 ***********
2025-06-01 23:15:33.496248 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:15:33.496259 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:15:33.496270 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:15:33.496280 | orchestrator |
2025-06-01 23:15:33.496291 | orchestrator | TASK [octavia : Gather facts] **************************************************
2025-06-01 23:15:33.496302 | orchestrator | Sunday 01 June 2025 23:12:54 +0000 (0:00:00.621) 0:02:01.122 ***********
2025-06-01 23:15:33.496313 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:15:33.496324 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:15:33.496334 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:15:33.496345 | orchestrator |
2025-06-01 23:15:33.496356 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-06-01 23:15:33.496367 | orchestrator | Sunday 01 June 2025 23:12:57 +0000 (0:00:02.827) 0:02:03.949 ***********
2025-06-01 23:15:33.496378 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:15:33.496388 | orchestrator |
2025-06-01 23:15:33.496399 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2025-06-01 23:15:33.496410 | orchestrator | Sunday 01 June 2025 23:12:58 +0000 (0:00:00.756) 0:02:04.706 ***********
2025-06-01 23:15:33.496421 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:15:33.496431 | orchestrator |
2025-06-01 23:15:33.496442 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-06-01 23:15:33.496453 | orchestrator | Sunday 01 June 2025 23:13:01 +0000 (0:00:03.388) 0:02:08.094 ***********
2025-06-01 23:15:33.496464 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:15:33.496474 | orchestrator |
2025-06-01 23:15:33.496485 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2025-06-01 23:15:33.496496 | orchestrator | Sunday 01 June 2025 23:13:04 +0000 (0:00:03.045) 0:02:11.140 ***********
2025-06-01 23:15:33.496507 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-06-01 23:15:33.496517 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-06-01 23:15:33.496528 | orchestrator |
2025-06-01 23:15:33.496539 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2025-06-01 23:15:33.496550 | orchestrator | Sunday 01 June 2025 23:13:12 +0000 (0:00:07.303) 0:02:18.444 ***********
2025-06-01 23:15:33.496561 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:15:33.496571 | orchestrator |
2025-06-01 23:15:33.496582 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2025-06-01 23:15:33.496605 | orchestrator | Sunday 01 June 2025 23:13:15 +0000 (0:00:03.388) 0:02:21.832 ***********
2025-06-01 23:15:33.496616 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:15:33.496627 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:15:33.496638 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:15:33.496649 | orchestrator |
2025-06-01 23:15:33.496659 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2025-06-01 23:15:33.496670 | orchestrator | Sunday 01 June 2025 23:13:15 +0000 (0:00:00.348) 0:02:22.180 ***********
2025-06-01 23:15:33.496684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-01 23:15:33.496706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-01 23:15:33.496720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-01 23:15:33.496732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-01 23:15:33.496749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 23:15:33.496768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.496780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.496791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 23:15:33.496810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.496823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.496834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.496857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:15:33.496868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.496953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:15:33.496972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:15:33.496984 | orchestrator | 2025-06-01 23:15:33.496996 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-06-01 23:15:33.497007 | orchestrator | Sunday 01 June 2025 23:13:18 +0000 (0:00:02.737) 0:02:24.918 *********** 2025-06-01 23:15:33.497017 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:15:33.497028 | orchestrator | 2025-06-01 23:15:33.497039 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-06-01 23:15:33.497050 | orchestrator | Sunday 01 June 2025 23:13:19 +0000 (0:00:00.387) 0:02:25.305 *********** 2025-06-01 23:15:33.497061 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:15:33.497072 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:15:33.497083 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:15:33.497093 | orchestrator | 2025-06-01 23:15:33.497104 | orchestrator | TASK 
[octavia : Copying over existing policy file] ***************************** 2025-06-01 23:15:33.497115 | orchestrator | Sunday 01 June 2025 23:13:19 +0000 (0:00:00.325) 0:02:25.631 *********** 2025-06-01 23:15:33.497126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 23:15:33.497153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 23:15:33.497165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 23:15:33.497176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 23:15:33.497188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:15:33.497199 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:15:33.497218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 23:15:33.497238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 23:15:33.497259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}})  2025-06-01 23:15:33.497270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 23:15:33.497280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:15:33.497290 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:15:33.497307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 23:15:33.497317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 23:15:33.497333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 23:15:33.497343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 23:15:33.497358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:15:33.497368 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:15:33.497378 | orchestrator | 2025-06-01 23:15:33.497388 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-01 23:15:33.497398 | orchestrator | Sunday 01 June 2025 23:13:20 +0000 (0:00:00.717) 0:02:26.348 *********** 2025-06-01 23:15:33.497408 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:15:33.497417 | orchestrator | 2025-06-01 23:15:33.497427 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-06-01 23:15:33.497437 | orchestrator | Sunday 01 June 2025 23:13:20 +0000 (0:00:00.548) 0:02:26.896 *********** 2025-06-01 23:15:33.497447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 23:15:33.497464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 23:15:33.497480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 23:15:33.497495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 23:15:33.497505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 23:15:33.497515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 23:15:33.497525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.497540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.497556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.497566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.497581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.497591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.497601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:15:33.497617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:15:33.497633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:15:33.497643 | orchestrator | 2025-06-01 23:15:33.497653 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-06-01 23:15:33.497663 | orchestrator | Sunday 01 June 2025 23:13:25 +0000 (0:00:05.188) 0:02:32.085 *********** 2025-06-01 23:15:33.497673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 23:15:33.497687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 23:15:33.497698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 23:15:33.497708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 23:15:33.497728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:15:33.497738 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:15:33.497749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 23:15:33.497759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 23:15:33.497773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 23:15:33.497784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 23:15:33.497794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:15:33.497809 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:15:33.497826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 23:15:33.497836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 23:15:33.497846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}})  2025-06-01 23:15:33.497861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 23:15:33.497891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:15:33.497910 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:15:33.497926 | orchestrator | 2025-06-01 23:15:33.497942 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-06-01 23:15:33.497952 | orchestrator | Sunday 01 June 2025 23:13:26 +0000 (0:00:00.700) 0:02:32.786 *********** 2025-06-01 23:15:33.497963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 23:15:33.497986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 23:15:33.497997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 23:15:33.498007 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 23:15:33.498066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:15:33.498084 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:15:33.498101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 23:15:33.498133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 23:15:33.498155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 23:15:33.498166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 23:15:33.498176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:15:33.498185 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:15:33.498201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 23:15:33.498211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 23:15:33.498227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 23:15:33.498245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 23:15:33.498255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:15:33.498265 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:15:33.498275 | orchestrator | 2025-06-01 23:15:33.498285 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-06-01 23:15:33.498294 | orchestrator | Sunday 01 June 2025 23:13:27 +0000 (0:00:00.901) 0:02:33.688 *********** 2025-06-01 23:15:33.498305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 23:15:33.498319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 23:15:33.498335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 23:15:33.498664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 23:15:33.498684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 23:15:33.498694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 23:15:33.498705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.498721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.498739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.498749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.498766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.498777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.498787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:15:33.498801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:15:33.498818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:15:33.498828 | orchestrator | 2025-06-01 23:15:33.498838 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-06-01 23:15:33.498848 | orchestrator | Sunday 01 June 2025 23:13:32 +0000 (0:00:05.426) 0:02:39.114 *********** 2025-06-01 23:15:33.498858 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-06-01 23:15:33.498868 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-06-01 23:15:33.498947 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-06-01 23:15:33.498957 | orchestrator | 2025-06-01 23:15:33.498967 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-06-01 23:15:33.498976 | orchestrator | Sunday 01 June 2025 23:13:34 +0000 (0:00:01.624) 0:02:40.739 *********** 2025-06-01 23:15:33.498993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 23:15:33.499004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 23:15:33.499020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 23:15:33.499037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 23:15:33.499048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 23:15:33.499058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 23:15:33.499073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.499084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.499093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.499117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.499127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.499138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.499148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:15:33.499164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:15:33.499174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:15:33.499184 | orchestrator | 2025-06-01 23:15:33.499193 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-06-01 23:15:33.499203 | orchestrator | Sunday 01 June 2025 23:13:52 +0000 (0:00:17.494) 0:02:58.233 *********** 2025-06-01 23:15:33.499219 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:15:33.499229 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:15:33.499239 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:15:33.499249 | orchestrator | 2025-06-01 23:15:33.499259 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-06-01 23:15:33.499268 | orchestrator | Sunday 01 June 2025 23:13:53 +0000 (0:00:01.590) 0:02:59.823 *********** 2025-06-01 23:15:33.499278 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-01 23:15:33.499287 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-01 23:15:33.499297 | 
orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-01 23:15:33.499306 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-01 23:15:33.499316 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-01 23:15:33.499326 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-06-01 23:15:33.499339 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-01 23:15:33.499349 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-01 23:15:33.499359 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-01 23:15:33.499369 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-01 23:15:33.499378 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-01 23:15:33.499388 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-01 23:15:33.499397 | orchestrator | 2025-06-01 23:15:33.499407 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-06-01 23:15:33.499417 | orchestrator | Sunday 01 June 2025 23:13:59 +0000 (0:00:05.524) 0:03:05.348 *********** 2025-06-01 23:15:33.499426 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-01 23:15:33.499436 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-01 23:15:33.499443 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-01 23:15:33.499451 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-01 23:15:33.499459 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-06-01 23:15:33.499467 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-01 23:15:33.499474 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-01 23:15:33.499482 | orchestrator 
| changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-01 23:15:33.499490 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-01 23:15:33.499498 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-01 23:15:33.499506 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-01 23:15:33.499513 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-01 23:15:33.499521 | orchestrator | 2025-06-01 23:15:33.499529 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-06-01 23:15:33.499537 | orchestrator | Sunday 01 June 2025 23:14:04 +0000 (0:00:05.258) 0:03:10.606 *********** 2025-06-01 23:15:33.499544 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-01 23:15:33.499552 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-01 23:15:33.499560 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-01 23:15:33.499568 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-01 23:15:33.499634 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-06-01 23:15:33.499644 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-01 23:15:33.499652 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-01 23:15:33.499665 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-01 23:15:33.499673 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-01 23:15:33.499689 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-01 23:15:33.499697 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-01 23:15:33.499705 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-01 23:15:33.499713 | orchestrator | 2025-06-01 
23:15:33.499721 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-06-01 23:15:33.499729 | orchestrator | Sunday 01 June 2025 23:14:09 +0000 (0:00:05.210) 0:03:15.817 *********** 2025-06-01 23:15:33.499737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 23:15:33.499750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 23:15:33.499759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 23:15:33.499768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 23:15:33.499781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 23:15:33.499795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-01 23:15:33.499803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.499812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.499823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.499832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.499840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.499858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-01 23:15:33.499867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:15:33.499893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:15:33.499906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-01 23:15:33.499914 | orchestrator | 2025-06-01 23:15:33.499922 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-01 23:15:33.499930 | orchestrator | Sunday 01 June 2025 23:14:13 +0000 (0:00:03.486) 0:03:19.304 *********** 2025-06-01 23:15:33.499938 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:15:33.499946 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:15:33.499954 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:15:33.499962 | orchestrator | 2025-06-01 23:15:33.499970 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-06-01 23:15:33.499977 | orchestrator | Sunday 01 June 2025 23:14:13 +0000 (0:00:00.353) 0:03:19.657 *********** 2025-06-01 23:15:33.499985 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:15:33.499993 | orchestrator | 2025-06-01 23:15:33.500001 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-06-01 23:15:33.500008 | orchestrator | Sunday 01 June 2025 23:14:15 +0000 (0:00:01.976) 0:03:21.634 *********** 2025-06-01 
23:15:33.500016 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:15:33.500024 | orchestrator | 2025-06-01 23:15:33.500032 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-06-01 23:15:33.500040 | orchestrator | Sunday 01 June 2025 23:14:18 +0000 (0:00:02.729) 0:03:24.363 *********** 2025-06-01 23:15:33.500052 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:15:33.500060 | orchestrator | 2025-06-01 23:15:33.500068 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-06-01 23:15:33.500076 | orchestrator | Sunday 01 June 2025 23:14:20 +0000 (0:00:02.147) 0:03:26.511 *********** 2025-06-01 23:15:33.500084 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:15:33.500092 | orchestrator | 2025-06-01 23:15:33.500099 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-06-01 23:15:33.500107 | orchestrator | Sunday 01 June 2025 23:14:22 +0000 (0:00:02.128) 0:03:28.639 *********** 2025-06-01 23:15:33.500115 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:15:33.500123 | orchestrator | 2025-06-01 23:15:33.500131 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-01 23:15:33.500138 | orchestrator | Sunday 01 June 2025 23:14:41 +0000 (0:00:19.257) 0:03:47.897 *********** 2025-06-01 23:15:33.500146 | orchestrator | 2025-06-01 23:15:33.500154 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-01 23:15:33.500162 | orchestrator | Sunday 01 June 2025 23:14:41 +0000 (0:00:00.071) 0:03:47.968 *********** 2025-06-01 23:15:33.500170 | orchestrator | 2025-06-01 23:15:33.500177 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-01 23:15:33.500185 | orchestrator | Sunday 01 June 2025 23:14:41 +0000 (0:00:00.067) 0:03:48.036 *********** 
2025-06-01 23:15:33.500193 | orchestrator | 2025-06-01 23:15:33.500201 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-06-01 23:15:33.500213 | orchestrator | Sunday 01 June 2025 23:14:41 +0000 (0:00:00.073) 0:03:48.110 *********** 2025-06-01 23:15:33.500221 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:15:33.500229 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:15:33.500237 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:15:33.500245 | orchestrator | 2025-06-01 23:15:33.500253 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-06-01 23:15:33.500261 | orchestrator | Sunday 01 June 2025 23:14:53 +0000 (0:00:11.430) 0:03:59.540 *********** 2025-06-01 23:15:33.500269 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:15:33.500277 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:15:33.500284 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:15:33.500292 | orchestrator | 2025-06-01 23:15:33.500300 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-06-01 23:15:33.500308 | orchestrator | Sunday 01 June 2025 23:15:04 +0000 (0:00:11.620) 0:04:11.161 *********** 2025-06-01 23:15:33.500316 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:15:33.500323 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:15:33.500331 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:15:33.500339 | orchestrator | 2025-06-01 23:15:33.500347 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-06-01 23:15:33.500355 | orchestrator | Sunday 01 June 2025 23:15:15 +0000 (0:00:10.590) 0:04:21.751 *********** 2025-06-01 23:15:33.500362 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:15:33.500370 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:15:33.500378 | orchestrator | changed: [testbed-node-2] 2025-06-01 
23:15:33.500386 | orchestrator | 2025-06-01 23:15:33.500394 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-06-01 23:15:33.500401 | orchestrator | Sunday 01 June 2025 23:15:26 +0000 (0:00:10.700) 0:04:32.452 *********** 2025-06-01 23:15:33.500409 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:15:33.500417 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:15:33.500425 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:15:33.500432 | orchestrator | 2025-06-01 23:15:33.500440 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:15:33.500449 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-01 23:15:33.500457 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 23:15:33.500470 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 23:15:33.500478 | orchestrator | 2025-06-01 23:15:33.500486 | orchestrator | 2025-06-01 23:15:33.500494 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 23:15:33.500502 | orchestrator | Sunday 01 June 2025 23:15:31 +0000 (0:00:05.361) 0:04:37.813 *********** 2025-06-01 23:15:33.500513 | orchestrator | =============================================================================== 2025-06-01 23:15:33.500521 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 19.26s 2025-06-01 23:15:33.500529 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.49s 2025-06-01 23:15:33.500537 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.64s 2025-06-01 23:15:33.500545 | orchestrator | octavia : Adding octavia related roles --------------------------------- 
15.09s 2025-06-01 23:15:33.500553 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.62s 2025-06-01 23:15:33.500560 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.43s 2025-06-01 23:15:33.500568 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.70s 2025-06-01 23:15:33.500576 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.59s 2025-06-01 23:15:33.500584 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.30s 2025-06-01 23:15:33.500591 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.41s 2025-06-01 23:15:33.500599 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.30s 2025-06-01 23:15:33.500607 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.20s 2025-06-01 23:15:33.500615 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.24s 2025-06-01 23:15:33.500623 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.52s 2025-06-01 23:15:33.500630 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.43s 2025-06-01 23:15:33.500638 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.36s 2025-06-01 23:15:33.500646 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.26s 2025-06-01 23:15:33.500654 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.22s 2025-06-01 23:15:33.500662 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 5.22s 2025-06-01 23:15:33.500669 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.21s 
2025-06-01 23:15:33.500677 | orchestrator | 2025-06-01 23:15:33 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-01 23:15:36.539835 | orchestrator | 2025-06-01 23:15:36 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-01 23:15:39.586605 | orchestrator | 2025-06-01 23:15:39 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-01 23:15:42.639326 | orchestrator | 2025-06-01 23:15:42 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-01 23:15:45.685920 | orchestrator | 2025-06-01 23:15:45 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-01 23:15:48.729122 | orchestrator | 2025-06-01 23:15:48 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-01 23:15:51.779762 | orchestrator | 2025-06-01 23:15:51 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-01 23:15:54.824841 | orchestrator | 2025-06-01 23:15:54 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-01 23:15:57.871900 | orchestrator | 2025-06-01 23:15:57 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-01 23:16:00.916824 | orchestrator | 2025-06-01 23:16:00 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-01 23:16:03.970604 | orchestrator | 2025-06-01 23:16:03 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-01 23:16:07.018561 | orchestrator | 2025-06-01 23:16:07 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-01 23:16:10.064944 | orchestrator | 2025-06-01 23:16:10 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-01 23:16:13.112714 | orchestrator | 2025-06-01 23:16:13 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-01 23:16:16.158514 | orchestrator | 2025-06-01 23:16:16 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-01 23:16:19.207737 | orchestrator | 2025-06-01 23:16:19 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-01 23:16:22.258190 | orchestrator | 2025-06-01 23:16:22 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-01 23:16:25.307383 | orchestrator | 2025-06-01 23:16:25 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-01 23:16:28.356064 | orchestrator | 2025-06-01 23:16:28 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-01 23:16:31.399335 | orchestrator | 2025-06-01 23:16:31 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-01 23:16:34.443449 | orchestrator |
2025-06-01 23:16:34.760162 | orchestrator |
2025-06-01 23:16:34.764831 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sun Jun 1 23:16:34 UTC 2025
2025-06-01 23:16:34.764918 | orchestrator |
2025-06-01 23:16:35.108063 | orchestrator | ok: Runtime: 0:36:27.647822
2025-06-01 23:16:35.361929 |
2025-06-01 23:16:35.362080 | TASK [Bootstrap services]
2025-06-01 23:16:36.150379 | orchestrator |
2025-06-01 23:16:36.150559 | orchestrator | # BOOTSTRAP
2025-06-01 23:16:36.150582 | orchestrator |
2025-06-01 23:16:36.150597 | orchestrator | + set -e
2025-06-01 23:16:36.150610 | orchestrator | + echo
2025-06-01 23:16:36.150624 | orchestrator | + echo '# BOOTSTRAP'
2025-06-01 23:16:36.150642 | orchestrator | + echo
2025-06-01 23:16:36.150685 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2025-06-01 23:16:36.160467 | orchestrator | + set -e
2025-06-01 23:16:36.160499 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2025-06-01 23:16:38.331862 | orchestrator | 2025-06-01 23:16:38 | INFO  | It takes a moment until task b92e5bd1-4768-47ef-9200-fdfb83fea286 (flavor-manager) has been started and output is visible here.
2025-06-01 23:16:42.195456 | orchestrator | 2025-06-01 23:16:42 | INFO  | Flavor SCS-1V-4 created
2025-06-01 23:16:42.353110 | orchestrator | 2025-06-01 23:16:42 | INFO  | Flavor SCS-2V-8 created
2025-06-01 23:16:42.544625 | orchestrator | 2025-06-01 23:16:42 | INFO  | Flavor SCS-4V-16 created
2025-06-01 23:16:42.701726 | orchestrator | 2025-06-01 23:16:42 | INFO  | Flavor SCS-8V-32 created
2025-06-01 23:16:42.838100 | orchestrator | 2025-06-01 23:16:42 | INFO  | Flavor SCS-1V-2 created
2025-06-01 23:16:42.974810 | orchestrator | 2025-06-01 23:16:42 | INFO  | Flavor SCS-2V-4 created
2025-06-01 23:16:43.116061 | orchestrator | 2025-06-01 23:16:43 | INFO  | Flavor SCS-4V-8 created
2025-06-01 23:16:43.262367 | orchestrator | 2025-06-01 23:16:43 | INFO  | Flavor SCS-8V-16 created
2025-06-01 23:16:43.387587 | orchestrator | 2025-06-01 23:16:43 | INFO  | Flavor SCS-16V-32 created
2025-06-01 23:16:43.503250 | orchestrator | 2025-06-01 23:16:43 | INFO  | Flavor SCS-1V-8 created
2025-06-01 23:16:43.636575 | orchestrator | 2025-06-01 23:16:43 | INFO  | Flavor SCS-2V-16 created
2025-06-01 23:16:43.759789 | orchestrator | 2025-06-01 23:16:43 | INFO  | Flavor SCS-4V-32 created
2025-06-01 23:16:43.889299 | orchestrator | 2025-06-01 23:16:43 | INFO  | Flavor SCS-1L-1 created
2025-06-01 23:16:44.035732 | orchestrator | 2025-06-01 23:16:44 | INFO  | Flavor SCS-2V-4-20s created
2025-06-01 23:16:44.171157 | orchestrator | 2025-06-01 23:16:44 | INFO  | Flavor SCS-4V-16-100s created
2025-06-01 23:16:44.298509 | orchestrator | 2025-06-01 23:16:44 | INFO  | Flavor SCS-1V-4-10 created
2025-06-01 23:16:44.429483 | orchestrator | 2025-06-01 23:16:44 | INFO  | Flavor SCS-2V-8-20 created
2025-06-01 23:16:44.572835 | orchestrator | 2025-06-01 23:16:44 | INFO  | Flavor SCS-4V-16-50 created
2025-06-01 23:16:44.693098 | orchestrator | 2025-06-01 23:16:44 | INFO  | Flavor SCS-8V-32-100 created
2025-06-01 23:16:44.825910 | orchestrator | 2025-06-01 23:16:44 | INFO  | Flavor SCS-1V-2-5 created
2025-06-01 23:16:44.978691 | orchestrator | 2025-06-01 23:16:44 | INFO  | Flavor SCS-2V-4-10 created
2025-06-01 23:16:45.122013 | orchestrator | 2025-06-01 23:16:45 | INFO  | Flavor SCS-4V-8-20 created
2025-06-01 23:16:45.264979 | orchestrator | 2025-06-01 23:16:45 | INFO  | Flavor SCS-8V-16-50 created
2025-06-01 23:16:45.385373 | orchestrator | 2025-06-01 23:16:45 | INFO  | Flavor SCS-16V-32-100 created
2025-06-01 23:16:45.502838 | orchestrator | 2025-06-01 23:16:45 | INFO  | Flavor SCS-1V-8-20 created
2025-06-01 23:16:45.616441 | orchestrator | 2025-06-01 23:16:45 | INFO  | Flavor SCS-2V-16-50 created
2025-06-01 23:16:45.743520 | orchestrator | 2025-06-01 23:16:45 | INFO  | Flavor SCS-4V-32-100 created
2025-06-01 23:16:45.893999 | orchestrator | 2025-06-01 23:16:45 | INFO  | Flavor SCS-1L-1-5 created
2025-06-01 23:16:48.301334 | orchestrator | 2025-06-01 23:16:48 | INFO  | Trying to run play bootstrap-basic in environment openstack
2025-06-01 23:16:48.306748 | orchestrator | Registering Redlock._acquired_script
2025-06-01 23:16:48.306789 | orchestrator | Registering Redlock._extend_script
2025-06-01 23:16:48.306831 | orchestrator | Registering Redlock._release_script
2025-06-01 23:16:48.371377 | orchestrator | 2025-06-01 23:16:48 | INFO  | Task 8b7d3c3d-bd6c-4b1b-a6e2-7639ed20f8a1 (bootstrap-basic) was prepared for execution.
2025-06-01 23:16:48.371430 | orchestrator | 2025-06-01 23:16:48 | INFO  | It takes a moment until task 8b7d3c3d-bd6c-4b1b-a6e2-7639ed20f8a1 (bootstrap-basic) has been started and output is visible here.
2025-06-01 23:16:52.760299 | orchestrator |
2025-06-01 23:16:52.760683 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2025-06-01 23:16:52.761646 | orchestrator |
2025-06-01 23:16:52.762279 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-01 23:16:52.763313 | orchestrator | Sunday 01 June 2025 23:16:52 +0000 (0:00:00.090) 0:00:00.090 ***********
2025-06-01 23:16:54.731574 | orchestrator | ok: [localhost]
2025-06-01 23:16:54.731688 | orchestrator |
2025-06-01 23:16:54.732624 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2025-06-01 23:16:54.734094 | orchestrator | Sunday 01 June 2025 23:16:54 +0000 (0:00:01.978) 0:00:02.068 ***********
2025-06-01 23:17:04.794948 | orchestrator | ok: [localhost]
2025-06-01 23:17:04.795098 | orchestrator |
2025-06-01 23:17:04.796512 | orchestrator | TASK [Create volume type LUKS] *************************************************
2025-06-01 23:17:04.798080 | orchestrator | Sunday 01 June 2025 23:17:04 +0000 (0:00:10.060) 0:00:12.129 ***********
2025-06-01 23:17:11.946332 | orchestrator | changed: [localhost]
2025-06-01 23:17:11.946477 | orchestrator |
2025-06-01 23:17:11.946498 | orchestrator | TASK [Get volume type local] ***************************************************
2025-06-01 23:17:11.946513 | orchestrator | Sunday 01 June 2025 23:17:11 +0000 (0:00:07.153) 0:00:19.282 ***********
2025-06-01 23:17:18.613201 | orchestrator | ok: [localhost]
2025-06-01 23:17:18.613368 | orchestrator |
2025-06-01 23:17:18.613403 | orchestrator | TASK [Create volume type local] ************************************************
2025-06-01 23:17:18.614323 | orchestrator | Sunday 01 June 2025 23:17:18 +0000 (0:00:06.667) 0:00:25.949 ***********
2025-06-01 23:17:26.419990 | orchestrator | changed: [localhost]
2025-06-01 23:17:26.420617 | orchestrator |
2025-06-01 23:17:26.420975 | orchestrator | TASK [Create public network] ***************************************************
2025-06-01 23:17:26.421115 | orchestrator | Sunday 01 June 2025 23:17:26 +0000 (0:00:07.805) 0:00:33.755 ***********
2025-06-01 23:17:31.639498 | orchestrator | changed: [localhost]
2025-06-01 23:17:31.639731 | orchestrator |
2025-06-01 23:17:31.640052 | orchestrator | TASK [Set public network to default] *******************************************
2025-06-01 23:17:31.640420 | orchestrator | Sunday 01 June 2025 23:17:31 +0000 (0:00:05.220) 0:00:38.975 ***********
2025-06-01 23:17:38.209439 | orchestrator | changed: [localhost]
2025-06-01 23:17:38.209565 | orchestrator |
2025-06-01 23:17:38.213046 | orchestrator | TASK [Create public subnet] ****************************************************
2025-06-01 23:17:38.213099 | orchestrator | Sunday 01 June 2025 23:17:38 +0000 (0:00:06.569) 0:00:45.545 ***********
2025-06-01 23:17:42.484983 | orchestrator | changed: [localhost]
2025-06-01 23:17:42.485471 | orchestrator |
2025-06-01 23:17:42.486938 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2025-06-01 23:17:42.488268 | orchestrator | Sunday 01 June 2025 23:17:42 +0000 (0:00:04.274) 0:00:49.820 ***********
2025-06-01 23:17:46.514001 | orchestrator | changed: [localhost]
2025-06-01 23:17:46.514166 | orchestrator |
2025-06-01 23:17:46.515320 | orchestrator | TASK [Create manager role] *****************************************************
2025-06-01 23:17:46.515369 | orchestrator | Sunday 01 June 2025 23:17:46 +0000 (0:00:04.029) 0:00:53.849 ***********
2025-06-01 23:17:50.226493 | orchestrator | ok: [localhost]
2025-06-01 23:17:50.229249 | orchestrator |
2025-06-01 23:17:50.230076 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:17:50.230709 | orchestrator | 2025-06-01 23:17:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 23:17:50.232561 | orchestrator | 2025-06-01 23:17:50 | INFO  | Please wait and do not abort execution.
2025-06-01 23:17:50.232753 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:17:50.233079 | orchestrator |
2025-06-01 23:17:50.233442 | orchestrator |
2025-06-01 23:17:50.234300 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:17:50.235491 | orchestrator | Sunday 01 June 2025 23:17:50 +0000 (0:00:03.714) 0:00:57.563 ***********
2025-06-01 23:17:50.235966 | orchestrator | ===============================================================================
2025-06-01 23:17:50.236847 | orchestrator | Get volume type LUKS --------------------------------------------------- 10.06s
2025-06-01 23:17:50.237351 | orchestrator | Create volume type local ------------------------------------------------ 7.81s
2025-06-01 23:17:50.240677 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.15s
2025-06-01 23:17:50.240765 | orchestrator | Get volume type local --------------------------------------------------- 6.67s
2025-06-01 23:17:50.241169 | orchestrator | Set public network to default ------------------------------------------- 6.57s
2025-06-01 23:17:50.242404 | orchestrator | Create public network --------------------------------------------------- 5.22s
2025-06-01 23:17:50.244078 | orchestrator | Create public subnet ---------------------------------------------------- 4.27s
2025-06-01 23:17:50.246206 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.03s
2025-06-01 23:17:50.247751 | orchestrator | Create manager role ----------------------------------------------------- 3.71s
2025-06-01 23:17:50.249402 | orchestrator | Gathering Facts --------------------------------------------------------- 1.98s
2025-06-01 23:17:52.718344 | orchestrator | 2025-06-01 23:17:52 | INFO  | It takes a moment until task 361b6dcb-18ea-4a0c-a347-3bb441f5d65d (image-manager) has been started and output is visible here.
2025-06-01 23:17:56.306102 | orchestrator | 2025-06-01 23:17:56 | INFO  | Processing image 'Cirros 0.6.2'
2025-06-01 23:17:56.524656 | orchestrator | 2025-06-01 23:17:56 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2025-06-01 23:17:56.525076 | orchestrator | 2025-06-01 23:17:56 | INFO  | Importing image Cirros 0.6.2
2025-06-01 23:17:56.528246 | orchestrator | 2025-06-01 23:17:56 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-06-01 23:17:58.108775 | orchestrator | 2025-06-01 23:17:58 | INFO  | Waiting for image to leave queued state...
2025-06-01 23:18:00.156699 | orchestrator | 2025-06-01 23:18:00 | INFO  | Waiting for import to complete...
2025-06-01 23:18:10.495958 | orchestrator | 2025-06-01 23:18:10 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2025-06-01 23:18:10.836805 | orchestrator | 2025-06-01 23:18:10 | INFO  | Checking parameters of 'Cirros 0.6.2'
2025-06-01 23:18:10.839211 | orchestrator | 2025-06-01 23:18:10 | INFO  | Setting internal_version = 0.6.2
2025-06-01 23:18:10.839248 | orchestrator | 2025-06-01 23:18:10 | INFO  | Setting image_original_user = cirros
2025-06-01 23:18:10.839581 | orchestrator | 2025-06-01 23:18:10 | INFO  | Adding tag os:cirros
2025-06-01 23:18:11.087054 | orchestrator | 2025-06-01 23:18:11 | INFO  | Setting property architecture: x86_64
2025-06-01 23:18:11.362854 | orchestrator | 2025-06-01 23:18:11 | INFO  | Setting property hw_disk_bus: scsi
2025-06-01 23:18:11.577234 | orchestrator | 2025-06-01 23:18:11 | INFO  | Setting property hw_rng_model: virtio
2025-06-01 23:18:11.752030 | orchestrator | 2025-06-01 23:18:11 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-06-01 23:18:11.950288 | orchestrator | 2025-06-01 23:18:11 | INFO  | Setting property hw_watchdog_action: reset
2025-06-01 23:18:12.132481 | orchestrator | 2025-06-01 23:18:12 | INFO  | Setting property hypervisor_type: qemu
2025-06-01 23:18:12.334006 | orchestrator | 2025-06-01 23:18:12 | INFO  | Setting property os_distro: cirros
2025-06-01 23:18:12.557680 | orchestrator | 2025-06-01 23:18:12 | INFO  | Setting property replace_frequency: never
2025-06-01 23:18:12.752465 | orchestrator | 2025-06-01 23:18:12 | INFO  | Setting property uuid_validity: none
2025-06-01 23:18:12.933299 | orchestrator | 2025-06-01 23:18:12 | INFO  | Setting property provided_until: none
2025-06-01 23:18:13.148416 | orchestrator | 2025-06-01 23:18:13 | INFO  | Setting property image_description: Cirros
2025-06-01 23:18:13.367692 | orchestrator | 2025-06-01 23:18:13 | INFO  | Setting property image_name: Cirros
2025-06-01 23:18:13.570621 | orchestrator | 2025-06-01 23:18:13 | INFO  | Setting property internal_version: 0.6.2
2025-06-01 23:18:13.798257 | orchestrator | 2025-06-01 23:18:13 | INFO  | Setting property image_original_user: cirros
2025-06-01 23:18:13.988830 | orchestrator | 2025-06-01 23:18:13 | INFO  | Setting property os_version: 0.6.2
2025-06-01 23:18:14.195225 | orchestrator | 2025-06-01 23:18:14 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-06-01 23:18:14.389786 | orchestrator | 2025-06-01 23:18:14 | INFO  | Setting property image_build_date: 2023-05-30
2025-06-01 23:18:14.565049 | orchestrator | 2025-06-01 23:18:14 | INFO  | Checking status of 'Cirros 0.6.2'
2025-06-01 23:18:14.565986 | orchestrator | 2025-06-01 23:18:14 | INFO  | Checking visibility of 'Cirros 0.6.2'
2025-06-01 23:18:14.566674 | orchestrator | 2025-06-01 23:18:14 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2025-06-01 23:18:14.778223 | orchestrator | 2025-06-01 23:18:14 | INFO  | Processing image 'Cirros 0.6.3'
2025-06-01 23:18:14.985441 | orchestrator | 2025-06-01 23:18:14 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2025-06-01 23:18:14.987081 | orchestrator | 2025-06-01 23:18:14 | INFO  | Importing image Cirros 0.6.3
2025-06-01 23:18:14.987468 | orchestrator | 2025-06-01 23:18:14 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-06-01 23:18:16.023592 | orchestrator | 2025-06-01 23:18:16 | INFO  | Waiting for image to leave queued state...
2025-06-01 23:18:18.078730 | orchestrator | 2025-06-01 23:18:18 | INFO  | Waiting for import to complete...
2025-06-01 23:18:28.219755 | orchestrator | 2025-06-01 23:18:28 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2025-06-01 23:18:28.477536 | orchestrator | 2025-06-01 23:18:28 | INFO  | Checking parameters of 'Cirros 0.6.3'
2025-06-01 23:18:28.478459 | orchestrator | 2025-06-01 23:18:28 | INFO  | Setting internal_version = 0.6.3
2025-06-01 23:18:28.479829 | orchestrator | 2025-06-01 23:18:28 | INFO  | Setting image_original_user = cirros
2025-06-01 23:18:28.480713 | orchestrator | 2025-06-01 23:18:28 | INFO  | Adding tag os:cirros
2025-06-01 23:18:28.732993 | orchestrator | 2025-06-01 23:18:28 | INFO  | Setting property architecture: x86_64
2025-06-01 23:18:28.955183 | orchestrator | 2025-06-01 23:18:28 | INFO  | Setting property hw_disk_bus: scsi
2025-06-01 23:18:29.248023 | orchestrator | 2025-06-01 23:18:29 | INFO  | Setting property hw_rng_model: virtio
2025-06-01 23:18:29.472565 | orchestrator | 2025-06-01 23:18:29 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-06-01 23:18:29.700757 | orchestrator | 2025-06-01 23:18:29 | INFO  | Setting property hw_watchdog_action: reset
2025-06-01 23:18:29.896394 | orchestrator | 2025-06-01 23:18:29 | INFO  | Setting property hypervisor_type: qemu
2025-06-01 23:18:30.102435 | orchestrator | 2025-06-01 23:18:30 | INFO  | Setting property os_distro: cirros
2025-06-01 23:18:30.340286 | orchestrator | 2025-06-01 23:18:30 | INFO  | Setting property replace_frequency: never
2025-06-01 23:18:30.523031 | orchestrator | 2025-06-01 23:18:30 | INFO  | Setting property uuid_validity: none
2025-06-01 23:18:30.701292 | orchestrator | 2025-06-01 23:18:30 | INFO  | Setting property provided_until: none
2025-06-01 23:18:30.942807 | orchestrator | 2025-06-01 23:18:30 | INFO  | Setting property image_description: Cirros
2025-06-01 23:18:31.177862 | orchestrator | 2025-06-01 23:18:31 | INFO  | Setting property image_name: Cirros
2025-06-01 23:18:31.577849 | orchestrator | 2025-06-01 23:18:31 | INFO  | Setting property internal_version: 0.6.3
2025-06-01 23:18:31.794336 | orchestrator | 2025-06-01 23:18:31 | INFO  | Setting property image_original_user: cirros
2025-06-01 23:18:31.969700 | orchestrator | 2025-06-01 23:18:31 | INFO  | Setting property os_version: 0.6.3
2025-06-01 23:18:32.161393 | orchestrator | 2025-06-01 23:18:32 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-06-01 23:18:32.383694 | orchestrator | 2025-06-01 23:18:32 | INFO  | Setting property image_build_date: 2024-09-26
2025-06-01 23:18:32.600169 | orchestrator | 2025-06-01 23:18:32 | INFO  | Checking status of 'Cirros 0.6.3'
2025-06-01 23:18:32.600666 | orchestrator | 2025-06-01 23:18:32 | INFO  | Checking visibility of 'Cirros 0.6.3'
2025-06-01 23:18:32.601323 | orchestrator | 2025-06-01 23:18:32 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2025-06-01 23:18:33.788274 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2025-06-01 23:18:35.843261 | orchestrator | 2025-06-01 23:18:35 | INFO  | date: 2025-06-01
2025-06-01 23:18:35.843483 | orchestrator | 2025-06-01 23:18:35 | INFO  | image: octavia-amphora-haproxy-2024.2.20250601.qcow2
2025-06-01 23:18:35.843510 | orchestrator | 2025-06-01 23:18:35 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250601.qcow2
2025-06-01 23:18:35.843547 | orchestrator | 2025-06-01 23:18:35 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250601.qcow2.CHECKSUM
2025-06-01 23:18:35.885287 | orchestrator | 2025-06-01 23:18:35 | INFO  | checksum: 700471d784d62fa237f40333fe5c8c65dd56f28e7d4645bd524c044147a32271
2025-06-01 23:18:35.974094 | orchestrator | 2025-06-01 23:18:35 | INFO  | It takes a moment until task 4760f58f-3be7-4058-a738-b93e73a60a1d (image-manager) has been started and output is visible here.
2025-06-01 23:18:36.214777 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-06-01 23:18:36.215993 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound
2025-06-01 23:18:38.521649 | orchestrator | 2025-06-01 23:18:38 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-06-01'
2025-06-01 23:18:38.541630 | orchestrator | 2025-06-01 23:18:38 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250601.qcow2: 200
2025-06-01 23:18:38.542414 | orchestrator | 2025-06-01 23:18:38 | INFO  | Importing image OpenStack Octavia Amphora 2025-06-01
2025-06-01 23:18:38.543466 | orchestrator | 2025-06-01 23:18:38 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250601.qcow2
2025-06-01 23:18:39.672140 | orchestrator | 2025-06-01 23:18:39 | INFO  | Waiting for image to leave queued state...
2025-06-01 23:18:41.738189 | orchestrator | 2025-06-01 23:18:41 | INFO  | Waiting for import to complete...
2025-06-01 23:18:51.829651 | orchestrator | 2025-06-01 23:18:51 | INFO  | Waiting for import to complete...
2025-06-01 23:19:01.914852 | orchestrator | 2025-06-01 23:19:01 | INFO  | Waiting for import to complete...
2025-06-01 23:19:12.023878 | orchestrator | 2025-06-01 23:19:12 | INFO  | Waiting for import to complete...
2025-06-01 23:19:22.346746 | orchestrator | 2025-06-01 23:19:22 | INFO  | Waiting for import to complete...
2025-06-01 23:19:32.474344 | orchestrator | 2025-06-01 23:19:32 | INFO  | Import of 'OpenStack Octavia Amphora 2025-06-01' successfully completed, reloading images
2025-06-01 23:19:32.788873 | orchestrator | 2025-06-01 23:19:32 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-06-01'
2025-06-01 23:19:32.789528 | orchestrator | 2025-06-01 23:19:32 | INFO  | Setting internal_version = 2025-06-01
2025-06-01 23:19:32.790095 | orchestrator | 2025-06-01 23:19:32 | INFO  | Setting image_original_user = ubuntu
2025-06-01 23:19:32.791052 | orchestrator | 2025-06-01 23:19:32 | INFO  | Adding tag amphora
2025-06-01 23:19:33.011165 | orchestrator | 2025-06-01 23:19:33 | INFO  | Adding tag os:ubuntu
2025-06-01 23:19:33.194398 | orchestrator | 2025-06-01 23:19:33 | INFO  | Setting property architecture: x86_64
2025-06-01 23:19:33.409181 | orchestrator | 2025-06-01 23:19:33 | INFO  | Setting property hw_disk_bus: scsi
2025-06-01 23:19:33.621688 | orchestrator | 2025-06-01 23:19:33 | INFO  | Setting property hw_rng_model: virtio
2025-06-01 23:19:33.817108 | orchestrator | 2025-06-01 23:19:33 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-06-01 23:19:34.072242 | orchestrator | 2025-06-01 23:19:34 | INFO  | Setting property hw_watchdog_action: reset
2025-06-01 23:19:34.267883 | orchestrator | 2025-06-01 23:19:34 | INFO  | Setting property hypervisor_type: qemu
2025-06-01 23:19:34.484124 | orchestrator | 2025-06-01 23:19:34 | INFO  | Setting property os_distro: ubuntu
2025-06-01 23:19:34.686873 | orchestrator | 2025-06-01 23:19:34 | INFO  | Setting property replace_frequency: quarterly
2025-06-01 23:19:34.861518 | orchestrator | 2025-06-01 23:19:34 | INFO  | Setting property uuid_validity: last-1
2025-06-01 23:19:35.082207 | orchestrator | 2025-06-01 23:19:35 | INFO  | Setting property provided_until: none
2025-06-01 23:19:35.297113 | orchestrator | 2025-06-01 23:19:35 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2025-06-01 23:19:35.495617 | orchestrator | 2025-06-01 23:19:35 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2025-06-01 23:19:35.669545 | orchestrator | 2025-06-01 23:19:35 | INFO  | Setting property internal_version: 2025-06-01
2025-06-01 23:19:35.878435 | orchestrator | 2025-06-01 23:19:35 | INFO  | Setting property image_original_user: ubuntu
2025-06-01 23:19:36.073421 | orchestrator | 2025-06-01 23:19:36 | INFO  | Setting property os_version: 2025-06-01
2025-06-01 23:19:36.251633 | orchestrator | 2025-06-01 23:19:36 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250601.qcow2
2025-06-01 23:19:36.461706 | orchestrator | 2025-06-01 23:19:36 | INFO  | Setting property image_build_date: 2025-06-01
2025-06-01 23:19:36.679049 | orchestrator | 2025-06-01 23:19:36 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-06-01'
2025-06-01 23:19:36.680173 | orchestrator | 2025-06-01 23:19:36 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-06-01'
2025-06-01 23:19:36.851609 | orchestrator | 2025-06-01 23:19:36 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-06-01 23:19:36.852689 | orchestrator | 2025-06-01 23:19:36 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-06-01 23:19:36.853964 | orchestrator | 2025-06-01 23:19:36 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-06-01 23:19:36.855148 | orchestrator | 2025-06-01 23:19:36 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-06-01 23:19:37.542695 | orchestrator | ok: Runtime: 0:03:01.702619
2025-06-01 23:19:37.560087 |
2025-06-01 23:19:37.560214 | TASK [Run checks]
2025-06-01 23:19:38.326516 | orchestrator | + set -e
2025-06-01 23:19:38.326724 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-01 23:19:38.326750 | orchestrator | ++ export INTERACTIVE=false
2025-06-01 23:19:38.326773 | orchestrator | ++ INTERACTIVE=false
2025-06-01 23:19:38.326787 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-01 23:19:38.326800 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-01 23:19:38.326815 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-06-01 23:19:38.327407 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-06-01 23:19:38.333708 | orchestrator |
2025-06-01 23:19:38.333758 | orchestrator | # CHECK
2025-06-01 23:19:38.333770 | orchestrator |
2025-06-01 23:19:38.333782 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-01 23:19:38.333798 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-01 23:19:38.333810 | orchestrator | + echo
2025-06-01 23:19:38.333822 | orchestrator | + echo '# CHECK'
2025-06-01 23:19:38.333833 | orchestrator | + echo
2025-06-01 23:19:38.333848 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-01 23:19:38.334686 | orchestrator | ++ semver 9.1.0 5.0.0
2025-06-01 23:19:38.398479 | orchestrator |
2025-06-01 23:19:38.398549 | orchestrator | ## Containers @ testbed-manager
2025-06-01 23:19:38.398571 | orchestrator |
2025-06-01 23:19:38.398594 | orchestrator | + [[ 1 -eq -1 ]]
2025-06-01 23:19:38.398614 | orchestrator | + echo
2025-06-01 23:19:38.398634 | orchestrator | + echo '## Containers @ testbed-manager'
2025-06-01 23:19:38.398654 | orchestrator | + echo
2025-06-01 23:19:38.398676 | orchestrator | + osism container testbed-manager ps
2025-06-01 23:19:40.578996 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-01 23:19:40.579127 | orchestrator | 8e86b36e804d registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_blackbox_exporter
2025-06-01 23:19:40.579152 | orchestrator | 57040b9402c3 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_alertmanager
2025-06-01 23:19:40.579173 | orchestrator | 77da32258e55 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor
2025-06-01 23:19:40.579185 | orchestrator | 2efd04d96429 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter
2025-06-01 23:19:40.579196 | orchestrator | e4f0d07583d2 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_server
2025-06-01 23:19:40.579209 | orchestrator | de0ad551ebb9 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 19 minutes ago Up 19 minutes cephclient
2025-06-01 23:19:40.579225 | orchestrator | 196e4b29eed9 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2025-06-01 23:19:40.579237 | orchestrator | 2eff276984d7 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox
2025-06-01 23:19:40.579248 | orchestrator | 792ff9f81065 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2025-06-01 23:19:40.579283 | orchestrator | a874f5a94875 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 32 minutes ago Up 31 minutes (healthy) 80/tcp phpmyadmin
2025-06-01 23:19:40.579296 | orchestrator | 1b582155ca95 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 32 minutes ago Up 32 minutes openstackclient
2025-06-01 23:19:40.579308 | orchestrator | 5d9ba6b05bdd registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 33 minutes ago Up 32 minutes (healthy) 8080/tcp homer
2025-06-01 23:19:40.579320 | orchestrator | cbd58fcda506 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 53 minutes ago Up 52 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2025-06-01 23:19:40.579336 | orchestrator | ed07c3d1cab8 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" 56 minutes ago Up 55 minutes (healthy) manager-inventory_reconciler-1
2025-06-01 23:19:40.579372 | orchestrator | 6102b3344e9a registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" 56 minutes ago Up 55 minutes (healthy) ceph-ansible
2025-06-01 23:19:40.579384 | orchestrator | f38cacd33547 registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" 56 minutes ago Up 55 minutes (healthy) osism-kubernetes
2025-06-01 23:19:40.579395 | orchestrator | ff4dcede3cc6 registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" 56 minutes ago Up 55 minutes (healthy) kolla-ansible
2025-06-01 23:19:40.579406 | orchestrator | 41cf9d6d2860 registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" 56 minutes ago Up 55 minutes (healthy) osism-ansible
2025-06-01 23:19:40.579417 | orchestrator | 323385c7e602 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 56 minutes ago Up 56 minutes (healthy) 8000/tcp manager-ara-server-1
2025-06-01 23:19:40.579429 | orchestrator | 09020308daff registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 56 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-06-01 23:19:40.579440 | orchestrator | 7c42168737ae registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" 56 minutes ago Up 56 minutes (healthy) osismclient
2025-06-01 23:19:40.579451 | orchestrator | eb44390011ab registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 56 minutes (healthy) manager-flower-1
2025-06-01 23:19:40.579463 | orchestrator | c7faf2ca791f registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 56 minutes (healthy) manager-listener-1
2025-06-01 23:19:40.579483 | orchestrator | b2ff3741b2d2 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 56 minutes (healthy) manager-openstack-1
2025-06-01 23:19:40.579494 | orchestrator | 84c07b1c3abb registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 56 minutes (healthy) manager-beat-1
2025-06-01 23:19:40.579505 | orchestrator | ef0296eebe8b registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" 56 minutes ago Up 56 minutes (healthy) 6379/tcp manager-redis-1
2025-06-01 23:19:40.579517 | orchestrator | af40af202fcc registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 56 minutes ago Up 56 minutes (healthy) 3306/tcp manager-mariadb-1
2025-06-01 23:19:40.579528 | orchestrator | 6789a0ff884d registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 56 minutes (healthy) manager-watchdog-1
2025-06-01 23:19:40.579540 | orchestrator | e1a00287df62 registry.osism.tech/dockerhub/library/traefik:v3.4.1 "/entrypoint.sh trae…" 57 minutes ago Up 57 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-06-01 23:19:40.878731 | orchestrator |
2025-06-01 23:19:40.878832 | orchestrator | ## Images @ testbed-manager
2025-06-01 23:19:40.878847 | orchestrator |
2025-06-01 23:19:40.878860 | orchestrator | + echo
2025-06-01 23:19:40.878872 | orchestrator | + echo '## Images @ testbed-manager'
2025-06-01 23:19:40.878885 | orchestrator | + echo
2025-06-01 23:19:40.878896 | orchestrator | + osism container testbed-manager images
2025-06-01 23:19:43.037056 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-01 23:19:43.037198 | orchestrator | registry.osism.tech/osism/homer v25.05.2 322317afcf13 20 hours ago 11.5MB
2025-06-01 23:19:43.037218 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 f2fe5144a396 20 hours ago 225MB
2025-06-01 23:19:43.037295 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250530.0 73cd5a0acb2a 26 hours ago 574MB
2025-06-01 23:19:43.037311 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250531.0 eb6fb0ff8e52 27 hours ago 578MB
2025-06-01 23:19:43.037323 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 2 days ago 319MB
2025-06-01 23:19:43.037336 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 2 days ago 747MB
2025-06-01 23:19:43.037347 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 2 days ago 629MB
2025-06-01 23:19:43.037359 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250530 48bb7d2c6b08 2 days ago 892MB
2025-06-01 23:19:43.037371 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250530 3d4c4d6fe7fa 2 days ago 361MB
2025-06-01 23:19:43.037383 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 2 days ago 411MB
2025-06-01 23:19:43.037395 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 2 days ago 359MB
2025-06-01 23:19:43.037431 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250530 0e447338580d 2 days ago 457MB
2025-06-01 23:19:43.037444 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250530.0 bce894afc91f 2 days ago 538MB
2025-06-01 23:19:43.037456 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250530.0 467731c31786 2 days ago 1.21GB
2025-06-01 23:19:43.037467 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250530.0 1b4e0cdc5cdd 2 days ago 308MB
2025-06-01 23:19:43.037478 | orchestrator | registry.osism.tech/osism/osism 0.20250530.0 bce098659f68 2 days ago 297MB
2025-06-01 23:19:43.037489 | orchestrator |
registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 3 days ago 41.4MB 2025-06-01 23:19:43.037500 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.1 ff0a241c8a0a 5 days ago 224MB 2025-06-01 23:19:43.037511 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 3 weeks ago 453MB 2025-06-01 23:19:43.037522 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 4815a3e162ea 3 months ago 328MB 2025-06-01 23:19:43.037533 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 4 months ago 571MB 2025-06-01 23:19:43.037544 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 9 months ago 300MB 2025-06-01 23:19:43.037555 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 11 months ago 146MB 2025-06-01 23:19:43.347782 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-01 23:19:43.348351 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-01 23:19:43.399445 | orchestrator | 2025-06-01 23:19:43.399529 | orchestrator | ## Containers @ testbed-node-0 2025-06-01 23:19:43.399544 | orchestrator | 2025-06-01 23:19:43.399558 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-01 23:19:43.399570 | orchestrator | + echo 2025-06-01 23:19:43.399582 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-06-01 23:19:43.399594 | orchestrator | + echo 2025-06-01 23:19:43.399606 | orchestrator | + osism container testbed-node-0 ps 2025-06-01 23:19:45.644524 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-01 23:19:45.644626 | orchestrator | 21cf32469e1a registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-06-01 23:19:45.644639 | orchestrator | 1cdc78a39ff3 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 
2025-06-01 23:19:45.644646 | orchestrator | 3ae5095784d8 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-06-01 23:19:45.644653 | orchestrator | fd49d7227351 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-06-01 23:19:45.644660 | orchestrator | adf7f0977626 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-06-01 23:19:45.644693 | orchestrator | a72ad1ac8829 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-06-01 23:19:45.644700 | orchestrator | c841c24f0842 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor 2025-06-01 23:19:45.644734 | orchestrator | 7ff7671bcbe5 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api 2025-06-01 23:19:45.644742 | orchestrator | c241b8a5d834 registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-06-01 23:19:45.644749 | orchestrator | 00121649f139 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker 2025-06-01 23:19:45.644755 | orchestrator | ec526e1720ff registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2025-06-01 23:19:45.644762 | orchestrator | 674238768b75 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2025-06-01 23:19:45.644768 | 
orchestrator | 00635fbe4426 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2025-06-01 23:19:45.644774 | orchestrator | 56756ee1e126 registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server 2025-06-01 23:19:45.644781 | orchestrator | 56aacd562aa3 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2025-06-01 23:19:45.644787 | orchestrator | 3c823628b836 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_novncproxy 2025-06-01 23:19:45.644794 | orchestrator | 25a764e1176c registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9 2025-06-01 23:19:45.644799 | orchestrator | 8bf1b77c950a registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_conductor 2025-06-01 23:19:45.644805 | orchestrator | 1329ea2cb8e7 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_worker 2025-06-01 23:19:45.644829 | orchestrator | 7ce47b1d78b1 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_keystone_listener 2025-06-01 23:19:45.644836 | orchestrator | b87b7863711e registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api 2025-06-01 23:19:45.644842 | orchestrator | a1965608d020 registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) nova_api 
2025-06-01 23:19:45.644849 | orchestrator | 30832418b346 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-01 23:19:45.644855 | orchestrator | 9ba14016a9d5 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_scheduler 2025-06-01 23:19:45.644866 | orchestrator | 45ff83bacf9c registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) glance_api 2025-06-01 23:19:45.644878 | orchestrator | 5212be05fdff registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2025-06-01 23:19:45.644888 | orchestrator | 1e54b2afcc48 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor 2025-06-01 23:19:45.644895 | orchestrator | a576d1c21eff registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_api 2025-06-01 23:19:45.644901 | orchestrator | 1990d428ea94 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter 2025-06-01 23:19:45.644941 | orchestrator | befc8fc5f7ce registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2025-06-01 23:19:45.644950 | orchestrator | 88fee66f7905 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2025-06-01 23:19:45.644956 | orchestrator | 1bc90909aac5 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 18 minutes ago Up 18 
minutes ceph-mgr-testbed-node-0 2025-06-01 23:19:45.644963 | orchestrator | ae9b79460935 registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-06-01 23:19:45.644969 | orchestrator | a748dd5e451c registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-06-01 23:19:45.644975 | orchestrator | 4adf2abc710e registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-06-01 23:19:45.644985 | orchestrator | 42d3be1b4a4d registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (unhealthy) horizon 2025-06-01 23:19:45.644992 | orchestrator | 3f51e3234c36 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-06-01 23:19:45.644998 | orchestrator | f11a70cb74ed registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2025-06-01 23:19:45.645005 | orchestrator | 69d26dd95b91 registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch 2025-06-01 23:19:45.645011 | orchestrator | 27dc36b6e698 registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-06-01 23:19:45.645025 | orchestrator | 81f6da9029cc registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-0 2025-06-01 23:19:45.645032 | orchestrator | ad86f0a1215c registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-06-01 23:19:45.645039 | orchestrator | 
477641134ae2 registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2025-06-01 23:19:45.645052 | orchestrator | 3e8dc246c881 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_northd 2025-06-01 23:19:45.645058 | orchestrator | 8a9796b20d46 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db 2025-06-01 23:19:45.645064 | orchestrator | 0f345f57ed2a registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db 2025-06-01 23:19:45.645071 | orchestrator | 7aa5891ddc96 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller 2025-06-01 23:19:45.645077 | orchestrator | 93666e8ca21a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-0 2025-06-01 23:19:45.645083 | orchestrator | 2878c642c4b8 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) rabbitmq 2025-06-01 23:19:45.645090 | orchestrator | 009618063f42 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-06-01 23:19:45.645096 | orchestrator | bdbabf828e09 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-06-01 23:19:45.645103 | orchestrator | 41225e6898da registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-06-01 23:19:45.645109 | orchestrator | b652d4ff1041 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 
30 minutes ago Up 30 minutes (healthy) redis 2025-06-01 23:19:45.645115 | orchestrator | c0c8b4e11d8b registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-06-01 23:19:45.645122 | orchestrator | 40b7b4bd7b7b registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2025-06-01 23:19:45.645128 | orchestrator | c3b7c918b88f registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-06-01 23:19:45.645134 | orchestrator | 86b9d690dfc8 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-06-01 23:19:45.990990 | orchestrator | 2025-06-01 23:19:45.991088 | orchestrator | ## Images @ testbed-node-0 2025-06-01 23:19:45.991100 | orchestrator | 2025-06-01 23:19:45.991109 | orchestrator | + echo 2025-06-01 23:19:45.991118 | orchestrator | + echo '## Images @ testbed-node-0' 2025-06-01 23:19:45.991129 | orchestrator | + echo 2025-06-01 23:19:45.991138 | orchestrator | + osism container testbed-node-0 images 2025-06-01 23:19:48.271495 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-01 23:19:48.271629 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 2 days ago 319MB 2025-06-01 23:19:48.271645 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 2 days ago 319MB 2025-06-01 23:19:48.272435 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 2 days ago 330MB 2025-06-01 23:19:48.272498 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 2 days ago 1.59GB 2025-06-01 23:19:48.272512 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 2 days ago 1.55GB 2025-06-01 23:19:48.272524 | orchestrator | 
registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 2 days ago 419MB 2025-06-01 23:19:48.272537 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 2 days ago 747MB 2025-06-01 23:19:48.272568 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 2 days ago 376MB 2025-06-01 23:19:48.272582 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 2 days ago 327MB 2025-06-01 23:19:48.272594 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 2 days ago 629MB 2025-06-01 23:19:48.272606 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 2 days ago 1.01GB 2025-06-01 23:19:48.272618 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 2 days ago 591MB 2025-06-01 23:19:48.272631 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 2 days ago 354MB 2025-06-01 23:19:48.272644 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 2 days ago 411MB 2025-06-01 23:19:48.272656 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 2 days ago 352MB 2025-06-01 23:19:48.272669 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 2 days ago 345MB 2025-06-01 23:19:48.272680 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 2 days ago 359MB 2025-06-01 23:19:48.272691 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 2 days ago 325MB 2025-06-01 23:19:48.272701 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 2 days ago 326MB 2025-06-01 23:19:48.272712 | orchestrator | 
registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 2 days ago 1.21GB 2025-06-01 23:19:48.272722 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 2 days ago 362MB 2025-06-01 23:19:48.272733 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 2 days ago 362MB 2025-06-01 23:19:48.272744 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 2 days ago 1.15GB 2025-06-01 23:19:48.272755 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 2 days ago 1.04GB 2025-06-01 23:19:48.272765 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 2 days ago 1.25GB 2025-06-01 23:19:48.272776 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250530 ec3349a6437e 2 days ago 1.04GB 2025-06-01 23:19:48.272786 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250530 726d5cfde6f9 2 days ago 1.04GB 2025-06-01 23:19:48.272797 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250530 c2f966fc60ed 2 days ago 1.04GB 2025-06-01 23:19:48.272808 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250530 7c85bdb64788 2 days ago 1.04GB 2025-06-01 23:19:48.272818 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 2 days ago 1.2GB 2025-06-01 23:19:48.272836 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 2 days ago 1.31GB 2025-06-01 23:19:48.272875 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 2 days ago 1.12GB 2025-06-01 23:19:48.272892 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 2 days ago 1.12GB 2025-06-01 23:19:48.272904 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 
15.0.1.20250530 81c4f823534a 2 days ago 1.1GB 2025-06-01 23:19:48.272938 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 2 days ago 1.1GB 2025-06-01 23:19:48.272950 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 2 days ago 1.1GB 2025-06-01 23:19:48.272960 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 2 days ago 1.41GB 2025-06-01 23:19:48.272971 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 2 days ago 1.41GB 2025-06-01 23:19:48.272981 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 2 days ago 1.06GB 2025-06-01 23:19:48.272992 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 2 days ago 1.06GB 2025-06-01 23:19:48.273002 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 2 days ago 1.05GB 2025-06-01 23:19:48.273013 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 2 days ago 1.05GB 2025-06-01 23:19:48.273023 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 2 days ago 1.05GB 2025-06-01 23:19:48.273034 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 2 days ago 1.05GB 2025-06-01 23:19:48.273044 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250530 aa9066568160 2 days ago 1.04GB 2025-06-01 23:19:48.273055 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250530 546dea2f2472 2 days ago 1.04GB 2025-06-01 23:19:48.273065 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 2 days ago 1.3GB 2025-06-01 23:19:48.273076 | orchestrator | registry.osism.tech/kolla/release/nova-api 
30.0.1.20250530 9fd4859cd2ca 2 days ago 1.29GB 2025-06-01 23:19:48.273086 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 2 days ago 1.42GB 2025-06-01 23:19:48.273097 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 2 days ago 1.29GB 2025-06-01 23:19:48.273107 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 2 days ago 1.06GB 2025-06-01 23:19:48.273118 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 2 days ago 1.06GB 2025-06-01 23:19:48.273128 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 2 days ago 1.06GB 2025-06-01 23:19:48.273138 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 2 days ago 1.11GB 2025-06-01 23:19:48.273149 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 2 days ago 1.13GB 2025-06-01 23:19:48.273159 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 2 days ago 1.11GB 2025-06-01 23:19:48.273177 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250530 df0a04869ff0 2 days ago 1.11GB 2025-06-01 23:19:48.273187 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250530 e1b2b0cc8e5c 2 days ago 1.12GB 2025-06-01 23:19:48.273198 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 2 days ago 947MB 2025-06-01 23:19:48.273208 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 2 days ago 948MB 2025-06-01 23:19:48.273225 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 2 days ago 947MB 2025-06-01 23:19:48.273236 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 2 days 
ago 948MB 2025-06-01 23:19:48.273247 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 weeks ago 1.27GB 2025-06-01 23:19:48.616347 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-01 23:19:48.617356 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-01 23:19:48.684478 | orchestrator | 2025-06-01 23:19:48.684574 | orchestrator | ## Containers @ testbed-node-1 2025-06-01 23:19:48.684589 | orchestrator | 2025-06-01 23:19:48.684600 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-01 23:19:48.684612 | orchestrator | + echo 2025-06-01 23:19:48.684625 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-06-01 23:19:48.684637 | orchestrator | + echo 2025-06-01 23:19:48.684648 | orchestrator | + osism container testbed-node-1 ps 2025-06-01 23:19:50.923290 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-01 23:19:50.923391 | orchestrator | 9e9999261e67 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-06-01 23:19:50.923405 | orchestrator | d30c45c5f3c6 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-06-01 23:19:50.923414 | orchestrator | e986caaed472 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-06-01 23:19:50.923423 | orchestrator | 3c60d6537ad6 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-06-01 23:19:50.923432 | orchestrator | f386903de6c6 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2025-06-01 23:19:50.923441 | orchestrator | a4420deba70c 
registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-06-01 23:19:50.923450 | orchestrator | 5858c64586c9 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor 2025-06-01 23:19:50.923459 | orchestrator | 8509a46d6bd9 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api 2025-06-01 23:19:50.923468 | orchestrator | 27bd670a83ad registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-06-01 23:19:50.923477 | orchestrator | 12cd88487113 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker 2025-06-01 23:19:50.923510 | orchestrator | d122d51a80cb registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2025-06-01 23:19:50.923520 | orchestrator | 3d3b1dc0a425 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2025-06-01 23:19:50.923528 | orchestrator | 8fdff4969c2e registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server 2025-06-01 23:19:50.923537 | orchestrator | 77a820dbe112 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2025-06-01 23:19:50.923546 | orchestrator | 06c92916e36e registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_novncproxy 2025-06-01 23:19:50.923555 | orchestrator | 1bb1652d72a9 
registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2025-06-01 23:19:50.923582 | orchestrator | 0af279a4661e registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9 2025-06-01 23:19:50.923602 | orchestrator | 8e5c550cdabe registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_conductor 2025-06-01 23:19:50.923617 | orchestrator | 1ef23acd18c0 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_worker 2025-06-01 23:19:50.923650 | orchestrator | 861b615cfd08 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_keystone_listener 2025-06-01 23:19:50.923666 | orchestrator | a62cb6e558d9 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api 2025-06-01 23:19:50.923676 | orchestrator | 4551b61194fb registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) nova_api 2025-06-01 23:19:50.923685 | orchestrator | 1ae47e245554 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-01 23:19:50.923694 | orchestrator | dce675f3a44a registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) glance_api 2025-06-01 23:19:50.923703 | orchestrator | d2215ec2ec16 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_scheduler 2025-06-01 23:19:50.923711 | orchestrator | 
258fdf60b66f registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_elasticsearch_exporter 2025-06-01 23:19:50.923723 | orchestrator | 4976e1c644ee registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_api 2025-06-01 23:19:50.923732 | orchestrator | 7451047f898c registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor 2025-06-01 23:19:50.923749 | orchestrator | 887397acbe59 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter 2025-06-01 23:19:50.923759 | orchestrator | c35c582df169 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2025-06-01 23:19:50.923768 | orchestrator | 8ae3b8a5936b registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter 2025-06-01 23:19:50.923777 | orchestrator | 05fe4db61ee7 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-1 2025-06-01 23:19:50.923785 | orchestrator | d1f39b4a78ef registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-06-01 23:19:50.923795 | orchestrator | 439432fbd434 registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (unhealthy) horizon 2025-06-01 23:19:50.923803 | orchestrator | 534b2ed63a11 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-06-01 23:19:50.923812 | 
orchestrator | df6d338363f8 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-06-01 23:19:50.923821 | orchestrator | 561363696f86 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-06-01 23:19:50.923830 | orchestrator | 39e6ba90e1b0 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 23 minutes ago Up 22 minutes (healthy) mariadb 2025-06-01 23:19:50.923840 | orchestrator | c5eff5079c9d registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2025-06-01 23:19:50.923856 | orchestrator | 2d662e6afb41 registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-06-01 23:19:50.923873 | orchestrator | 9950f52e4dcc registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-1 2025-06-01 23:19:50.923884 | orchestrator | c06c603b0c4e registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 25 minutes ago Up 24 minutes (healthy) proxysql 2025-06-01 23:19:50.923893 | orchestrator | 4a2b0de1f96b registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2025-06-01 23:19:50.923904 | orchestrator | 8d73ec557c17 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_northd 2025-06-01 23:19:50.923940 | orchestrator | f97160886511 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_sb_db 2025-06-01 23:19:50.923952 | orchestrator | ef1f44f33597 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init 
--single-…" 28 minutes ago Up 27 minutes ovn_nb_db 2025-06-01 23:19:50.923963 | orchestrator | 122d1ba87086 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-06-01 23:19:50.923978 | orchestrator | b34be28a961a registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2025-06-01 23:19:50.923989 | orchestrator | 0b90b514015e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-1 2025-06-01 23:19:50.923999 | orchestrator | 522da4865948 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2025-06-01 23:19:50.924029 | orchestrator | 504bd4f88029 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-06-01 23:19:50.924039 | orchestrator | 5dc6ccb623ec registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-06-01 23:19:50.924053 | orchestrator | f34312fe045b registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-06-01 23:19:50.924069 | orchestrator | 956c48a1aac4 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-06-01 23:19:50.924083 | orchestrator | 71c47ef3fefa registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2025-06-01 23:19:50.924098 | orchestrator | 59e72bfff56a registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-06-01 23:19:50.924114 | orchestrator | e09da5c8d2f4 
registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-06-01 23:19:51.254258 | orchestrator | 2025-06-01 23:19:51.254366 | orchestrator | ## Images @ testbed-node-1 2025-06-01 23:19:51.254383 | orchestrator | 2025-06-01 23:19:51.254395 | orchestrator | + echo 2025-06-01 23:19:51.254407 | orchestrator | + echo '## Images @ testbed-node-1' 2025-06-01 23:19:51.254420 | orchestrator | + echo 2025-06-01 23:19:51.254431 | orchestrator | + osism container testbed-node-1 images 2025-06-01 23:19:53.415706 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-01 23:19:53.416552 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 2 days ago 319MB 2025-06-01 23:19:53.416586 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 2 days ago 319MB 2025-06-01 23:19:53.416599 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 2 days ago 330MB 2025-06-01 23:19:53.416611 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 2 days ago 1.59GB 2025-06-01 23:19:53.416622 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 2 days ago 1.55GB 2025-06-01 23:19:53.416633 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 2 days ago 419MB 2025-06-01 23:19:53.416644 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 2 days ago 747MB 2025-06-01 23:19:53.416655 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 2 days ago 327MB 2025-06-01 23:19:53.416687 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 2 days ago 376MB 2025-06-01 23:19:53.416700 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 2 days ago 629MB 2025-06-01 23:19:53.416711 | 
orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 2 days ago 1.01GB 2025-06-01 23:19:53.416738 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 2 days ago 591MB 2025-06-01 23:19:53.416750 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 2 days ago 354MB 2025-06-01 23:19:53.416761 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 2 days ago 352MB 2025-06-01 23:19:53.416772 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 2 days ago 411MB 2025-06-01 23:19:53.416783 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 2 days ago 345MB 2025-06-01 23:19:53.416794 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 2 days ago 359MB 2025-06-01 23:19:53.416804 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 2 days ago 326MB 2025-06-01 23:19:53.416815 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 2 days ago 325MB 2025-06-01 23:19:53.416826 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 2 days ago 1.21GB 2025-06-01 23:19:53.416836 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 2 days ago 362MB 2025-06-01 23:19:53.416847 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 2 days ago 362MB 2025-06-01 23:19:53.416863 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 2 days ago 1.15GB 2025-06-01 23:19:53.416874 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 2 days ago 1.04GB 2025-06-01 23:19:53.416884 | 
orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 2 days ago 1.25GB 2025-06-01 23:19:53.416895 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 2 days ago 1.2GB 2025-06-01 23:19:53.416906 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 2 days ago 1.31GB 2025-06-01 23:19:53.416944 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 2 days ago 1.12GB 2025-06-01 23:19:53.416963 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 2 days ago 1.12GB 2025-06-01 23:19:53.416975 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 2 days ago 1.1GB 2025-06-01 23:19:53.416986 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 2 days ago 1.1GB 2025-06-01 23:19:53.417017 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 2 days ago 1.1GB 2025-06-01 23:19:53.417029 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 2 days ago 1.41GB 2025-06-01 23:19:53.417039 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 2 days ago 1.41GB 2025-06-01 23:19:53.417050 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 2 days ago 1.06GB 2025-06-01 23:19:53.417077 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 2 days ago 1.06GB 2025-06-01 23:19:53.417088 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 2 days ago 1.05GB 2025-06-01 23:19:53.417099 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 2 days ago 1.05GB 2025-06-01 23:19:53.417110 | orchestrator | 
registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 2 days ago 1.05GB 2025-06-01 23:19:53.417120 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 2 days ago 1.05GB 2025-06-01 23:19:53.417131 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 2 days ago 1.3GB 2025-06-01 23:19:53.417143 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 2 days ago 1.29GB 2025-06-01 23:19:53.417154 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 2 days ago 1.42GB 2025-06-01 23:19:53.417165 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 2 days ago 1.29GB 2025-06-01 23:19:53.417175 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 2 days ago 1.06GB 2025-06-01 23:19:53.417186 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 2 days ago 1.06GB 2025-06-01 23:19:53.417197 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 2 days ago 1.06GB 2025-06-01 23:19:53.417207 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 2 days ago 1.11GB 2025-06-01 23:19:53.417218 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 2 days ago 1.13GB 2025-06-01 23:19:53.417229 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 2 days ago 1.11GB 2025-06-01 23:19:53.417240 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 2 days ago 947MB 2025-06-01 23:19:53.417250 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 2 days ago 948MB 2025-06-01 23:19:53.417261 | orchestrator | 
registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 2 days ago 947MB 2025-06-01 23:19:53.417272 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 2 days ago 948MB 2025-06-01 23:19:53.417283 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 weeks ago 1.27GB 2025-06-01 23:19:53.717337 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-01 23:19:53.718011 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-01 23:19:53.785053 | orchestrator | 2025-06-01 23:19:53.785123 | orchestrator | ## Containers @ testbed-node-2 2025-06-01 23:19:53.785136 | orchestrator | 2025-06-01 23:19:53.785148 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-01 23:19:53.785159 | orchestrator | + echo 2025-06-01 23:19:53.785189 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-06-01 23:19:53.785202 | orchestrator | + echo 2025-06-01 23:19:53.785235 | orchestrator | + osism container testbed-node-2 ps 2025-06-01 23:19:56.154531 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-01 23:19:56.154637 | orchestrator | 7d92a9148938 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-06-01 23:19:56.154676 | orchestrator | 7c99211613dc registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-06-01 23:19:56.154688 | orchestrator | 9cec01b408a1 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-06-01 23:19:56.154699 | orchestrator | 5049301b7545 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-06-01 23:19:56.154709 | orchestrator | 
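The trace above shows the deploy script calling `++ semver 9.1.0 5.0.0` and then branching on `[[ 1 -eq -1 ]]`, i.e. the helper appears to print a comparator result (-1, 0 or 1) for two semantic versions. A minimal sketch of such a comparator, using `sort -V` — the real `semver` helper on the testbed is not shown here, so this is an assumption about its contract, not its implementation:

```shell
# Sketch of a semver-style comparator printing -1/0/1, matching the
# `semver 9.1.0 5.0.0` -> `[[ 1 -eq -1 ]]` pattern in the trace above.
semver_cmp() {
  if [ "$1" = "$2" ]; then
    echo 0
  elif [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
    echo -1   # $1 sorts first under version ordering, i.e. $1 < $2
  else
    echo 1    # $1 > $2
  fi
}

semver_cmp 9.1.0 5.0.0   # prints 1, so the `-eq -1` branch is skipped
```

`sort -V` handles multi-digit components correctly (e.g. 1.10.0 > 1.2.0), which a plain string comparison would get wrong.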
dff0e50e6ec7 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-06-01 23:19:56.154721 | orchestrator | 3e9b92498dfb registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-06-01 23:19:56.154732 | orchestrator | 384cf33619b6 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor 2025-06-01 23:19:56.154743 | orchestrator | 65b3eddbaaa5 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api 2025-06-01 23:19:56.154754 | orchestrator | 817fcc041469 registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) placement_api 2025-06-01 23:19:56.154765 | orchestrator | 5c721c7e6506 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker 2025-06-01 23:19:56.154776 | orchestrator | 173ae8bdb0a3 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2025-06-01 23:19:56.154787 | orchestrator | ed33a3ab5c62 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2025-06-01 23:19:56.154798 | orchestrator | 7a6f54a83b69 registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server 2025-06-01 23:19:56.154808 | orchestrator | 78a2757c2249 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2025-06-01 23:19:56.154819 | orchestrator | 5a0fda1aa046 
registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 11 minutes (healthy) nova_novncproxy 2025-06-01 23:19:56.154830 | orchestrator | 84543dfb7e29 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_api 2025-06-01 23:19:56.154841 | orchestrator | 0cbb5ac6cc90 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9 2025-06-01 23:19:56.154852 | orchestrator | 863965358e8f registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_conductor 2025-06-01 23:19:56.154863 | orchestrator | 6117f3539e2c registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_worker 2025-06-01 23:19:56.154898 | orchestrator | 809507df9602 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_keystone_listener 2025-06-01 23:19:56.154911 | orchestrator | 50b7ed41a780 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) barbican_api 2025-06-01 23:19:56.154953 | orchestrator | c8efd1c9d1cb registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) nova_api 2025-06-01 23:19:56.154965 | orchestrator | 0dbd4046ee22 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 10 minutes (healthy) nova_scheduler 2025-06-01 23:19:56.154976 | orchestrator | cb017fde8179 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 15 minutes (healthy) glance_api 2025-06-01 23:19:56.154987 | orchestrator | 
ab608ac617cd registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_scheduler 2025-06-01 23:19:56.154998 | orchestrator | 38bd5fa01911 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_elasticsearch_exporter 2025-06-01 23:19:56.155011 | orchestrator | 5dea112de0a1 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_api 2025-06-01 23:19:56.155022 | orchestrator | 5881d9e17708 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor 2025-06-01 23:19:56.155034 | orchestrator | a152baa7610c registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter 2025-06-01 23:19:56.155045 | orchestrator | 88aea2029b48 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2025-06-01 23:19:56.155076 | orchestrator | 2094087f1068 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter 2025-06-01 23:19:56.155090 | orchestrator | 6c1627ca931d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-2 2025-06-01 23:19:56.155103 | orchestrator | 132f6f9f00cb registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-06-01 23:19:56.155116 | orchestrator | 7c9906bed184 registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (unhealthy) horizon 2025-06-01 23:19:56.155128 | 
orchestrator | b39e305ffe58 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-06-01 23:19:56.155141 | orchestrator | ba93ef7fd485 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-06-01 23:19:56.155155 | orchestrator | 51918459f894 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-06-01 23:19:56.155175 | orchestrator | 5abffeef6cc3 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2025-06-01 23:19:56.155187 | orchestrator | 799d92ac64e5 registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2025-06-01 23:19:56.155206 | orchestrator | 13231056a1f6 registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-06-01 23:19:56.155224 | orchestrator | 826bfb1b607a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-2 2025-06-01 23:19:56.155236 | orchestrator | 59aee1f55723 registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2025-06-01 23:19:56.155247 | orchestrator | d70094c32500 registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2025-06-01 23:19:56.155258 | orchestrator | 4910b646d889 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_northd 2025-06-01 23:19:56.155268 | orchestrator | 5e748d5ab5ed registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 
"dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_sb_db 2025-06-01 23:19:56.155279 | orchestrator | cc6ca8c676c4 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_nb_db 2025-06-01 23:19:56.155290 | orchestrator | 345fe9353213 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-06-01 23:19:56.155301 | orchestrator | 43ce7f5b7a35 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-06-01 23:19:56.155312 | orchestrator | cb31e643391f registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-2 2025-06-01 23:19:56.155323 | orchestrator | e2eb9531f755 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2025-06-01 23:19:56.155333 | orchestrator | 912f48a1366e registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-06-01 23:19:56.155344 | orchestrator | 0fc121753595 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-06-01 23:19:56.155355 | orchestrator | 7e3a8d93ffaa registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-06-01 23:19:56.155366 | orchestrator | 4cc041bd92dd registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) memcached 2025-06-01 23:19:56.155377 | orchestrator | af8b779c84db registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2025-06-01 23:19:56.155394 | orchestrator | 
e13195f38407 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-06-01 23:19:56.155405 | orchestrator | 5b3e0f4bf92e registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-06-01 23:19:56.480293 | orchestrator | 2025-06-01 23:19:56.480394 | orchestrator | ## Images @ testbed-node-2 2025-06-01 23:19:56.480410 | orchestrator | 2025-06-01 23:19:56.480422 | orchestrator | + echo 2025-06-01 23:19:56.480434 | orchestrator | + echo '## Images @ testbed-node-2' 2025-06-01 23:19:56.480446 | orchestrator | + echo 2025-06-01 23:19:56.480458 | orchestrator | + osism container testbed-node-2 images 2025-06-01 23:19:58.830875 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-01 23:19:58.831061 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 2 days ago 319MB 2025-06-01 23:19:58.831078 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 2 days ago 319MB 2025-06-01 23:19:58.831091 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 2 days ago 330MB 2025-06-01 23:19:58.831103 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 2 days ago 1.59GB 2025-06-01 23:19:58.831114 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 2 days ago 1.55GB 2025-06-01 23:19:58.831125 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 2 days ago 419MB 2025-06-01 23:19:58.831136 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 2 days ago 747MB 2025-06-01 23:19:58.831147 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 2 days ago 327MB 2025-06-01 23:19:58.831158 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 
3.13.7.20250530 6b32f249a415 2 days ago 376MB 2025-06-01 23:19:58.831170 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 2 days ago 629MB 2025-06-01 23:19:58.831181 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 2 days ago 1.01GB 2025-06-01 23:19:58.831192 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 2 days ago 591MB 2025-06-01 23:19:58.831203 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 2 days ago 354MB 2025-06-01 23:19:58.831214 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 2 days ago 411MB 2025-06-01 23:19:58.831245 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 2 days ago 352MB 2025-06-01 23:19:58.831257 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 2 days ago 345MB 2025-06-01 23:19:58.831268 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 2 days ago 359MB 2025-06-01 23:19:58.831279 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 2 days ago 326MB 2025-06-01 23:19:58.831290 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 2 days ago 325MB 2025-06-01 23:19:58.831301 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 2 days ago 1.21GB 2025-06-01 23:19:58.831312 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 2 days ago 362MB 2025-06-01 23:19:58.831345 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 2 days ago 362MB 2025-06-01 23:19:58.831357 | orchestrator | registry.osism.tech/kolla/release/glance-api 
29.0.1.20250530 9ac54d9b8655 2 days ago 1.15GB 2025-06-01 23:19:58.831367 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 2 days ago 1.04GB 2025-06-01 23:19:58.831378 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 2 days ago 1.25GB 2025-06-01 23:19:58.831389 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 2 days ago 1.2GB 2025-06-01 23:19:58.831400 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 2 days ago 1.31GB 2025-06-01 23:19:58.831412 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 2 days ago 1.12GB 2025-06-01 23:19:58.831424 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 2 days ago 1.12GB 2025-06-01 23:19:58.831437 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 2 days ago 1.1GB 2025-06-01 23:19:58.831449 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 2 days ago 1.1GB 2025-06-01 23:19:58.831479 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 2 days ago 1.1GB 2025-06-01 23:19:58.831492 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 2 days ago 1.41GB 2025-06-01 23:19:58.831504 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 2 days ago 1.41GB 2025-06-01 23:19:58.831516 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 2 days ago 1.06GB 2025-06-01 23:19:58.831533 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 2 days ago 1.06GB 2025-06-01 23:19:58.831546 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 
57148ade6082 2 days ago 1.05GB 2025-06-01 23:19:58.831559 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 2 days ago 1.05GB 2025-06-01 23:19:58.831571 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 2 days ago 1.05GB 2025-06-01 23:19:58.831583 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 2 days ago 1.05GB 2025-06-01 23:19:58.831595 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 2 days ago 1.3GB 2025-06-01 23:19:58.831607 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 2 days ago 1.29GB 2025-06-01 23:19:58.831620 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 2 days ago 1.42GB 2025-06-01 23:19:58.831632 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 2 days ago 1.29GB 2025-06-01 23:19:58.831644 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 2 days ago 1.06GB 2025-06-01 23:19:58.831657 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 2 days ago 1.06GB 2025-06-01 23:19:58.831670 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 2 days ago 1.06GB 2025-06-01 23:19:58.831690 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 2 days ago 1.11GB 2025-06-01 23:19:58.831703 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 2 days ago 1.13GB 2025-06-01 23:19:58.831715 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 2 days ago 1.11GB 2025-06-01 23:19:58.831727 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 2 days ago 947MB 
2025-06-01 23:19:58.831739 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 2 days ago 948MB 2025-06-01 23:19:58.831751 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 2 days ago 947MB 2025-06-01 23:19:58.831763 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 2 days ago 948MB 2025-06-01 23:19:58.831776 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 weeks ago 1.27GB 2025-06-01 23:19:59.155584 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-06-01 23:19:59.164680 | orchestrator | + set -e 2025-06-01 23:19:59.164720 | orchestrator | + source /opt/manager-vars.sh 2025-06-01 23:19:59.166226 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-01 23:19:59.166326 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-01 23:19:59.166343 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-01 23:19:59.166355 | orchestrator | ++ CEPH_VERSION=reef 2025-06-01 23:19:59.166367 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-01 23:19:59.166379 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-01 23:19:59.166390 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-01 23:19:59.166401 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-01 23:19:59.166412 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-01 23:19:59.166423 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-01 23:19:59.166434 | orchestrator | ++ export ARA=false 2025-06-01 23:19:59.166445 | orchestrator | ++ ARA=false 2025-06-01 23:19:59.166456 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-01 23:19:59.166466 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-01 23:19:59.166477 | orchestrator | ++ export TEMPEST=false 2025-06-01 23:19:59.166488 | orchestrator | ++ TEMPEST=false 2025-06-01 23:19:59.166498 | orchestrator | ++ export IS_ZUUL=true 2025-06-01 23:19:59.166509 | orchestrator | ++ 
IS_ZUUL=true 2025-06-01 23:19:59.166525 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.90 2025-06-01 23:19:59.166536 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.90 2025-06-01 23:19:59.166548 | orchestrator | ++ export EXTERNAL_API=false 2025-06-01 23:19:59.166559 | orchestrator | ++ EXTERNAL_API=false 2025-06-01 23:19:59.166569 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-01 23:19:59.166580 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-01 23:19:59.166590 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-01 23:19:59.166601 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-01 23:19:59.166612 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-01 23:19:59.166622 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-01 23:19:59.166633 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-01 23:19:59.166644 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-06-01 23:19:59.176674 | orchestrator | + set -e 2025-06-01 23:19:59.176722 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-01 23:19:59.176735 | orchestrator | ++ export INTERACTIVE=false 2025-06-01 23:19:59.176747 | orchestrator | ++ INTERACTIVE=false 2025-06-01 23:19:59.176758 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-01 23:19:59.176769 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-01 23:19:59.176780 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-01 23:19:59.178109 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-01 23:19:59.183790 | orchestrator | 2025-06-01 23:19:59.183816 | orchestrator | # Ceph status 2025-06-01 23:19:59.183828 | orchestrator | 2025-06-01 23:19:59.183839 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-01 23:19:59.183850 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-01 23:19:59.183861 | orchestrator | + echo 2025-06-01 
23:19:59.183872 | orchestrator | + echo '# Ceph status' 2025-06-01 23:19:59.183883 | orchestrator | + echo 2025-06-01 23:19:59.183966 | orchestrator | + ceph -s 2025-06-01 23:19:59.802224 | orchestrator | cluster: 2025-06-01 23:19:59.802323 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-06-01 23:19:59.802339 | orchestrator | health: HEALTH_OK 2025-06-01 23:19:59.802352 | orchestrator | 2025-06-01 23:19:59.802364 | orchestrator | services: 2025-06-01 23:19:59.802376 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 29m) 2025-06-01 23:19:59.802389 | orchestrator | mgr: testbed-node-0(active, since 17m), standbys: testbed-node-2, testbed-node-1 2025-06-01 23:19:59.802401 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-06-01 23:19:59.802412 | orchestrator | osd: 6 osds: 6 up (since 25m), 6 in (since 26m) 2025-06-01 23:19:59.802424 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-06-01 23:19:59.802435 | orchestrator | 2025-06-01 23:19:59.802446 | orchestrator | data: 2025-06-01 23:19:59.802457 | orchestrator | volumes: 1/1 healthy 2025-06-01 23:19:59.802468 | orchestrator | pools: 14 pools, 401 pgs 2025-06-01 23:19:59.802479 | orchestrator | objects: 524 objects, 2.2 GiB 2025-06-01 23:19:59.802491 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-06-01 23:19:59.802502 | orchestrator | pgs: 401 active+clean 2025-06-01 23:19:59.802513 | orchestrator | 2025-06-01 23:19:59.853025 | orchestrator | 2025-06-01 23:19:59.853110 | orchestrator | # Ceph versions 2025-06-01 23:19:59.853124 | orchestrator | 2025-06-01 23:19:59.853136 | orchestrator | + echo 2025-06-01 23:19:59.853148 | orchestrator | + echo '# Ceph versions' 2025-06-01 23:19:59.853161 | orchestrator | + echo 2025-06-01 23:19:59.853172 | orchestrator | + ceph versions 2025-06-01 23:20:00.467711 | orchestrator | { 2025-06-01 23:20:00.467863 | orchestrator | "mon": { 2025-06-01 23:20:00.467882 | orchestrator | "ceph version 
18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-01 23:20:00.467895 | orchestrator | }, 2025-06-01 23:20:00.467909 | orchestrator | "mgr": { 2025-06-01 23:20:00.467995 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-01 23:20:00.468015 | orchestrator | }, 2025-06-01 23:20:00.468039 | orchestrator | "osd": { 2025-06-01 23:20:00.468066 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-06-01 23:20:00.468081 | orchestrator | }, 2025-06-01 23:20:00.468098 | orchestrator | "mds": { 2025-06-01 23:20:00.468115 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-01 23:20:00.468131 | orchestrator | }, 2025-06-01 23:20:00.468148 | orchestrator | "rgw": { 2025-06-01 23:20:00.468165 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-01 23:20:00.468182 | orchestrator | }, 2025-06-01 23:20:00.468201 | orchestrator | "overall": { 2025-06-01 23:20:00.468217 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-06-01 23:20:00.468229 | orchestrator | } 2025-06-01 23:20:00.468240 | orchestrator | } 2025-06-01 23:20:00.527229 | orchestrator | 2025-06-01 23:20:00.527311 | orchestrator | # Ceph OSD tree 2025-06-01 23:20:00.527325 | orchestrator | 2025-06-01 23:20:00.527336 | orchestrator | + echo 2025-06-01 23:20:00.527348 | orchestrator | + echo '# Ceph OSD tree' 2025-06-01 23:20:00.527360 | orchestrator | + echo 2025-06-01 23:20:00.527371 | orchestrator | + ceph osd df tree 2025-06-01 23:20:01.084566 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-06-01 23:20:01.084688 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-06-01 23:20:01.084703 | orchestrator | -3 0.03897 - 40 
GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.91 1.00 - host testbed-node-3 2025-06-01 23:20:01.084714 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1019 MiB 1 KiB 70 MiB 19 GiB 5.32 0.90 189 up osd.0 2025-06-01 23:20:01.084726 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.51 1.10 201 up osd.3 2025-06-01 23:20:01.084737 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-06-01 23:20:01.084748 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.67 1.13 192 up osd.1 2025-06-01 23:20:01.084780 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.0 GiB 987 MiB 1 KiB 70 MiB 19 GiB 5.16 0.87 196 up osd.4 2025-06-01 23:20:01.084791 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-06-01 23:20:01.084802 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 7.04 1.19 205 up osd.2 2025-06-01 23:20:01.084813 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 980 MiB 907 MiB 1 KiB 74 MiB 19 GiB 4.79 0.81 187 up osd.5 2025-06-01 23:20:01.084823 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-06-01 23:20:01.084834 | orchestrator | MIN/MAX VAR: 0.81/1.19 STDDEV: 0.85 2025-06-01 23:20:01.132176 | orchestrator | 2025-06-01 23:20:01.132228 | orchestrator | # Ceph monitor status 2025-06-01 23:20:01.132242 | orchestrator | 2025-06-01 23:20:01.132254 | orchestrator | + echo 2025-06-01 23:20:01.132265 | orchestrator | + echo '# Ceph monitor status' 2025-06-01 23:20:01.132276 | orchestrator | + echo 2025-06-01 23:20:01.132287 | orchestrator | + ceph mon stat 2025-06-01 23:20:01.801579 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: 
{}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-06-01 23:20:01.860267 | orchestrator | 2025-06-01 23:20:01.860353 | orchestrator | # Ceph quorum status 2025-06-01 23:20:01.860368 | orchestrator | 2025-06-01 23:20:01.860380 | orchestrator | + echo 2025-06-01 23:20:01.860392 | orchestrator | + echo '# Ceph quorum status' 2025-06-01 23:20:01.860403 | orchestrator | + echo 2025-06-01 23:20:01.861547 | orchestrator | + ceph quorum_status 2025-06-01 23:20:01.861580 | orchestrator | + jq 2025-06-01 23:20:02.596111 | orchestrator | { 2025-06-01 23:20:02.596205 | orchestrator | "election_epoch": 4, 2025-06-01 23:20:02.596220 | orchestrator | "quorum": [ 2025-06-01 23:20:02.596232 | orchestrator | 0, 2025-06-01 23:20:02.596243 | orchestrator | 1, 2025-06-01 23:20:02.596254 | orchestrator | 2 2025-06-01 23:20:02.596265 | orchestrator | ], 2025-06-01 23:20:02.596275 | orchestrator | "quorum_names": [ 2025-06-01 23:20:02.596286 | orchestrator | "testbed-node-0", 2025-06-01 23:20:02.596297 | orchestrator | "testbed-node-1", 2025-06-01 23:20:02.596308 | orchestrator | "testbed-node-2" 2025-06-01 23:20:02.596319 | orchestrator | ], 2025-06-01 23:20:02.596330 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-06-01 23:20:02.596342 | orchestrator | "quorum_age": 1766, 2025-06-01 23:20:02.596353 | orchestrator | "features": { 2025-06-01 23:20:02.596364 | orchestrator | "quorum_con": "4540138322906710015", 2025-06-01 23:20:02.596375 | orchestrator | "quorum_mon": [ 2025-06-01 23:20:02.596386 | orchestrator | "kraken", 2025-06-01 23:20:02.596396 | orchestrator | "luminous", 2025-06-01 23:20:02.596407 | orchestrator | "mimic", 2025-06-01 23:20:02.596418 | orchestrator | "osdmap-prune", 2025-06-01 23:20:02.596428 | orchestrator | "nautilus", 2025-06-01 23:20:02.596439 | orchestrator | "octopus", 2025-06-01 23:20:02.596450 | orchestrator | "pacific", 2025-06-01 23:20:02.596460 | orchestrator | "elector-pinging", 
2025-06-01 23:20:02.596471 | orchestrator | "quincy", 2025-06-01 23:20:02.596481 | orchestrator | "reef" 2025-06-01 23:20:02.596492 | orchestrator | ] 2025-06-01 23:20:02.596503 | orchestrator | }, 2025-06-01 23:20:02.596514 | orchestrator | "monmap": { 2025-06-01 23:20:02.596525 | orchestrator | "epoch": 1, 2025-06-01 23:20:02.596536 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-06-01 23:20:02.596547 | orchestrator | "modified": "2025-06-01T22:50:23.489997Z", 2025-06-01 23:20:02.596558 | orchestrator | "created": "2025-06-01T22:50:23.489997Z", 2025-06-01 23:20:02.596569 | orchestrator | "min_mon_release": 18, 2025-06-01 23:20:02.596580 | orchestrator | "min_mon_release_name": "reef", 2025-06-01 23:20:02.596590 | orchestrator | "election_strategy": 1, 2025-06-01 23:20:02.596618 | orchestrator | "disallowed_leaders: ": "", 2025-06-01 23:20:02.596629 | orchestrator | "stretch_mode": false, 2025-06-01 23:20:02.596640 | orchestrator | "tiebreaker_mon": "", 2025-06-01 23:20:02.596650 | orchestrator | "removed_ranks: ": "", 2025-06-01 23:20:02.596661 | orchestrator | "features": { 2025-06-01 23:20:02.596672 | orchestrator | "persistent": [ 2025-06-01 23:20:02.596684 | orchestrator | "kraken", 2025-06-01 23:20:02.596696 | orchestrator | "luminous", 2025-06-01 23:20:02.596732 | orchestrator | "mimic", 2025-06-01 23:20:02.596744 | orchestrator | "osdmap-prune", 2025-06-01 23:20:02.596756 | orchestrator | "nautilus", 2025-06-01 23:20:02.596769 | orchestrator | "octopus", 2025-06-01 23:20:02.596781 | orchestrator | "pacific", 2025-06-01 23:20:02.596793 | orchestrator | "elector-pinging", 2025-06-01 23:20:02.596806 | orchestrator | "quincy", 2025-06-01 23:20:02.596818 | orchestrator | "reef" 2025-06-01 23:20:02.596830 | orchestrator | ], 2025-06-01 23:20:02.596843 | orchestrator | "optional": [] 2025-06-01 23:20:02.596855 | orchestrator | }, 2025-06-01 23:20:02.596868 | orchestrator | "mons": [ 2025-06-01 23:20:02.596881 | orchestrator | { 2025-06-01 
23:20:02.596893 | orchestrator | "rank": 0, 2025-06-01 23:20:02.596905 | orchestrator | "name": "testbed-node-0", 2025-06-01 23:20:02.596943 | orchestrator | "public_addrs": { 2025-06-01 23:20:02.596958 | orchestrator | "addrvec": [ 2025-06-01 23:20:02.596970 | orchestrator | { 2025-06-01 23:20:02.596983 | orchestrator | "type": "v2", 2025-06-01 23:20:02.596996 | orchestrator | "addr": "192.168.16.10:3300", 2025-06-01 23:20:02.597009 | orchestrator | "nonce": 0 2025-06-01 23:20:02.597021 | orchestrator | }, 2025-06-01 23:20:02.597032 | orchestrator | { 2025-06-01 23:20:02.597043 | orchestrator | "type": "v1", 2025-06-01 23:20:02.597054 | orchestrator | "addr": "192.168.16.10:6789", 2025-06-01 23:20:02.597064 | orchestrator | "nonce": 0 2025-06-01 23:20:02.597075 | orchestrator | } 2025-06-01 23:20:02.597085 | orchestrator | ] 2025-06-01 23:20:02.597096 | orchestrator | }, 2025-06-01 23:20:02.597106 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-06-01 23:20:02.597117 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-06-01 23:20:02.597128 | orchestrator | "priority": 0, 2025-06-01 23:20:02.597138 | orchestrator | "weight": 0, 2025-06-01 23:20:02.597149 | orchestrator | "crush_location": "{}" 2025-06-01 23:20:02.597159 | orchestrator | }, 2025-06-01 23:20:02.597170 | orchestrator | { 2025-06-01 23:20:02.597181 | orchestrator | "rank": 1, 2025-06-01 23:20:02.597191 | orchestrator | "name": "testbed-node-1", 2025-06-01 23:20:02.597202 | orchestrator | "public_addrs": { 2025-06-01 23:20:02.597213 | orchestrator | "addrvec": [ 2025-06-01 23:20:02.597223 | orchestrator | { 2025-06-01 23:20:02.597234 | orchestrator | "type": "v2", 2025-06-01 23:20:02.597244 | orchestrator | "addr": "192.168.16.11:3300", 2025-06-01 23:20:02.597255 | orchestrator | "nonce": 0 2025-06-01 23:20:02.597265 | orchestrator | }, 2025-06-01 23:20:02.597276 | orchestrator | { 2025-06-01 23:20:02.597287 | orchestrator | "type": "v1", 2025-06-01 23:20:02.597297 | orchestrator | "addr": 
"192.168.16.11:6789", 2025-06-01 23:20:02.597308 | orchestrator | "nonce": 0 2025-06-01 23:20:02.597318 | orchestrator | } 2025-06-01 23:20:02.597329 | orchestrator | ] 2025-06-01 23:20:02.597339 | orchestrator | }, 2025-06-01 23:20:02.597350 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-06-01 23:20:02.597361 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-06-01 23:20:02.597371 | orchestrator | "priority": 0, 2025-06-01 23:20:02.597382 | orchestrator | "weight": 0, 2025-06-01 23:20:02.597392 | orchestrator | "crush_location": "{}" 2025-06-01 23:20:02.597403 | orchestrator | }, 2025-06-01 23:20:02.597413 | orchestrator | { 2025-06-01 23:20:02.597424 | orchestrator | "rank": 2, 2025-06-01 23:20:02.597435 | orchestrator | "name": "testbed-node-2", 2025-06-01 23:20:02.597445 | orchestrator | "public_addrs": { 2025-06-01 23:20:02.597456 | orchestrator | "addrvec": [ 2025-06-01 23:20:02.597467 | orchestrator | { 2025-06-01 23:20:02.597477 | orchestrator | "type": "v2", 2025-06-01 23:20:02.597488 | orchestrator | "addr": "192.168.16.12:3300", 2025-06-01 23:20:02.597498 | orchestrator | "nonce": 0 2025-06-01 23:20:02.597509 | orchestrator | }, 2025-06-01 23:20:02.597520 | orchestrator | { 2025-06-01 23:20:02.597530 | orchestrator | "type": "v1", 2025-06-01 23:20:02.597541 | orchestrator | "addr": "192.168.16.12:6789", 2025-06-01 23:20:02.597551 | orchestrator | "nonce": 0 2025-06-01 23:20:02.597562 | orchestrator | } 2025-06-01 23:20:02.597573 | orchestrator | ] 2025-06-01 23:20:02.597583 | orchestrator | }, 2025-06-01 23:20:02.597594 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-06-01 23:20:02.597604 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-06-01 23:20:02.597615 | orchestrator | "priority": 0, 2025-06-01 23:20:02.597634 | orchestrator | "weight": 0, 2025-06-01 23:20:02.597645 | orchestrator | "crush_location": "{}" 2025-06-01 23:20:02.597655 | orchestrator | } 2025-06-01 23:20:02.597666 | orchestrator | ] 2025-06-01 
23:20:02.597677 | orchestrator | } 2025-06-01 23:20:02.597688 | orchestrator | } 2025-06-01 23:20:02.597698 | orchestrator | 2025-06-01 23:20:02.597709 | orchestrator | # Ceph free space status 2025-06-01 23:20:02.597720 | orchestrator | 2025-06-01 23:20:02.597731 | orchestrator | + echo 2025-06-01 23:20:02.597742 | orchestrator | + echo '# Ceph free space status' 2025-06-01 23:20:02.597753 | orchestrator | + echo 2025-06-01 23:20:02.597763 | orchestrator | + ceph df 2025-06-01 23:20:03.249410 | orchestrator | --- RAW STORAGE --- 2025-06-01 23:20:03.249506 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-06-01 23:20:03.249535 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-06-01 23:20:03.249547 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-06-01 23:20:03.249558 | orchestrator | 2025-06-01 23:20:03.249570 | orchestrator | --- POOLS --- 2025-06-01 23:20:03.249582 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-06-01 23:20:03.249594 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2025-06-01 23:20:03.249607 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-06-01 23:20:03.249618 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-06-01 23:20:03.249629 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-06-01 23:20:03.249640 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-06-01 23:20:03.249651 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-06-01 23:20:03.249662 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-06-01 23:20:03.249673 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-06-01 23:20:03.249683 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2025-06-01 23:20:03.249694 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-06-01 23:20:03.249705 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-06-01 23:20:03.249716 | orchestrator | images 12 32 
2.2 GiB 299 6.7 GiB 5.94 35 GiB 2025-06-01 23:20:03.249727 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-06-01 23:20:03.249738 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-06-01 23:20:03.303351 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-01 23:20:03.376375 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-01 23:20:03.376470 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-06-01 23:20:03.376553 | orchestrator | + osism apply facts 2025-06-01 23:20:05.095483 | orchestrator | Registering Redlock._acquired_script 2025-06-01 23:20:05.095595 | orchestrator | Registering Redlock._extend_script 2025-06-01 23:20:05.095611 | orchestrator | Registering Redlock._release_script 2025-06-01 23:20:05.157327 | orchestrator | 2025-06-01 23:20:05 | INFO  | Task 4ce45876-a06a-45f3-839e-105ee4f32298 (facts) was prepared for execution. 2025-06-01 23:20:05.157422 | orchestrator | 2025-06-01 23:20:05 | INFO  | It takes a moment until task 4ce45876-a06a-45f3-839e-105ee4f32298 (facts) has been started and output is visible here. 
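The trace above shows `semver 9.1.0 5.0.0` returning 1 (left version greater than right), and the check script only takes its legacy branch when the result is -1, i.e. for managers older than 5.0.0. A minimal sketch of that version gate, using a hypothetical pure-shell `compare_semver` helper as a stand-in for the actual `semver` binary the testbed ships:

```shell
#!/usr/bin/env sh
# compare_semver A B: print -1 if A < B, 0 if A = B, 1 if A > B.
# Hypothetical stand-in for the `semver` helper seen in the trace;
# relies on GNU sort's version comparison (-V).
compare_semver() {
    lowest=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$lowest" = "$1" ]; then
        echo -1
    else
        echo 1
    fi
}

result=$(compare_semver 9.1.0 5.0.0)
# Mirrors `[[ 1 -eq -1 ]]` in the trace: the legacy path is skipped.
if [ "$result" -eq -1 ]; then
    echo "legacy path"
else
    echo "current path"
fi
```

With the versions from the log this prints "current path", matching the skipped `[[ 1 -eq -1 ]]` test in the xtrace.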
2025-06-01 23:20:09.485697 | orchestrator | 2025-06-01 23:20:09.487787 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-01 23:20:09.487825 | orchestrator | 2025-06-01 23:20:09.489006 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-01 23:20:09.490740 | orchestrator | Sunday 01 June 2025 23:20:09 +0000 (0:00:00.266) 0:00:00.266 *********** 2025-06-01 23:20:10.965909 | orchestrator | ok: [testbed-manager] 2025-06-01 23:20:10.966367 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:20:10.969344 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:20:10.969383 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:20:10.969395 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:20:10.970530 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:20:10.971208 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:20:10.972091 | orchestrator | 2025-06-01 23:20:10.972773 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-01 23:20:10.974178 | orchestrator | Sunday 01 June 2025 23:20:10 +0000 (0:00:01.478) 0:00:01.745 *********** 2025-06-01 23:20:11.149457 | orchestrator | skipping: [testbed-manager] 2025-06-01 23:20:11.245450 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:20:11.332099 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:20:11.419547 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:20:11.495786 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:20:12.342338 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:20:12.342457 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:20:12.346273 | orchestrator | 2025-06-01 23:20:12.347003 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-01 23:20:12.348318 | orchestrator | 2025-06-01 23:20:12.349016 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-06-01 23:20:12.350210 | orchestrator | Sunday 01 June 2025 23:20:12 +0000 (0:00:01.378) 0:00:03.124 *********** 2025-06-01 23:20:17.745002 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:20:17.745817 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:20:17.745846 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:20:17.747340 | orchestrator | ok: [testbed-manager] 2025-06-01 23:20:17.750817 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:20:17.750843 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:20:17.750854 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:20:17.750867 | orchestrator | 2025-06-01 23:20:17.751074 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-01 23:20:17.751836 | orchestrator | 2025-06-01 23:20:17.752973 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-01 23:20:17.753385 | orchestrator | Sunday 01 June 2025 23:20:17 +0000 (0:00:05.404) 0:00:08.528 *********** 2025-06-01 23:20:17.927472 | orchestrator | skipping: [testbed-manager] 2025-06-01 23:20:18.021597 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:20:18.111627 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:20:18.203007 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:20:18.302250 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:20:18.352030 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:20:18.353151 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:20:18.355715 | orchestrator | 2025-06-01 23:20:18.355791 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:20:18.356970 | orchestrator | 2025-06-01 23:20:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-06-01 23:20:18.357133 | orchestrator | 2025-06-01 23:20:18 | INFO  | Please wait and do not abort execution. 2025-06-01 23:20:18.358420 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 23:20:18.358813 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 23:20:18.359902 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 23:20:18.360654 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 23:20:18.361252 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 23:20:18.361623 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 23:20:18.362591 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 23:20:18.362904 | orchestrator | 2025-06-01 23:20:18.363983 | orchestrator | 2025-06-01 23:20:18.364384 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 23:20:18.365011 | orchestrator | Sunday 01 June 2025 23:20:18 +0000 (0:00:00.608) 0:00:09.136 *********** 2025-06-01 23:20:18.366153 | orchestrator | =============================================================================== 2025-06-01 23:20:18.366660 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.40s 2025-06-01 23:20:18.367100 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.48s 2025-06-01 23:20:18.367539 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.38s 2025-06-01 23:20:18.367963 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.61s 2025-06-01 
23:20:19.261309 | orchestrator | + osism validate ceph-mons 2025-06-01 23:20:21.134765 | orchestrator | Registering Redlock._acquired_script 2025-06-01 23:20:21.134875 | orchestrator | Registering Redlock._extend_script 2025-06-01 23:20:21.134891 | orchestrator | Registering Redlock._release_script 2025-06-01 23:20:42.376535 | orchestrator | 2025-06-01 23:20:42.376698 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-06-01 23:20:42.376717 | orchestrator | 2025-06-01 23:20:42.376729 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-01 23:20:42.376758 | orchestrator | Sunday 01 June 2025 23:20:25 +0000 (0:00:00.462) 0:00:00.462 *********** 2025-06-01 23:20:42.376771 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-01 23:20:42.376782 | orchestrator | 2025-06-01 23:20:42.376793 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-01 23:20:42.376804 | orchestrator | Sunday 01 June 2025 23:20:26 +0000 (0:00:00.684) 0:00:01.146 *********** 2025-06-01 23:20:42.376815 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-01 23:20:42.376826 | orchestrator | 2025-06-01 23:20:42.376837 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-01 23:20:42.376848 | orchestrator | Sunday 01 June 2025 23:20:27 +0000 (0:00:00.954) 0:00:02.101 *********** 2025-06-01 23:20:42.376859 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:20:42.376870 | orchestrator | 2025-06-01 23:20:42.376881 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-06-01 23:20:42.376892 | orchestrator | Sunday 01 June 2025 23:20:27 +0000 (0:00:00.301) 0:00:02.403 *********** 2025-06-01 23:20:42.376903 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:20:42.376913 | orchestrator | ok: 
[testbed-node-1] 2025-06-01 23:20:42.376955 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:20:42.376968 | orchestrator | 2025-06-01 23:20:42.376979 | orchestrator | TASK [Get container info] ****************************************************** 2025-06-01 23:20:42.376990 | orchestrator | Sunday 01 June 2025 23:20:28 +0000 (0:00:00.327) 0:00:02.730 *********** 2025-06-01 23:20:42.377001 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:20:42.377011 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:20:42.377022 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:20:42.377033 | orchestrator | 2025-06-01 23:20:42.377044 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-06-01 23:20:42.377055 | orchestrator | Sunday 01 June 2025 23:20:29 +0000 (0:00:00.988) 0:00:03.718 *********** 2025-06-01 23:20:42.377066 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:20:42.377077 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:20:42.377088 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:20:42.377099 | orchestrator | 2025-06-01 23:20:42.377109 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-01 23:20:42.377120 | orchestrator | Sunday 01 June 2025 23:20:29 +0000 (0:00:00.290) 0:00:04.009 *********** 2025-06-01 23:20:42.377131 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:20:42.377142 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:20:42.377153 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:20:42.377163 | orchestrator | 2025-06-01 23:20:42.377174 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-01 23:20:42.377206 | orchestrator | Sunday 01 June 2025 23:20:30 +0000 (0:00:00.650) 0:00:04.659 *********** 2025-06-01 23:20:42.377218 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:20:42.377228 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:20:42.377239 | 
orchestrator | ok: [testbed-node-2]
2025-06-01 23:20:42.377250 | orchestrator |
2025-06-01 23:20:42.377261 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2025-06-01 23:20:42.377276 | orchestrator | Sunday 01 June 2025 23:20:30 +0000 (0:00:00.322) 0:00:04.982 ***********
2025-06-01 23:20:42.377295 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:20:42.377316 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:20:42.377334 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:20:42.377351 | orchestrator |
2025-06-01 23:20:42.377371 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2025-06-01 23:20:42.377390 | orchestrator | Sunday 01 June 2025 23:20:30 +0000 (0:00:00.301) 0:00:05.284 ***********
2025-06-01 23:20:42.377410 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:20:42.377422 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:20:42.377432 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:20:42.377443 | orchestrator |
2025-06-01 23:20:42.377454 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-01 23:20:42.377465 | orchestrator | Sunday 01 June 2025 23:20:31 +0000 (0:00:00.326) 0:00:05.610 ***********
2025-06-01 23:20:42.377475 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:20:42.377486 | orchestrator |
2025-06-01 23:20:42.377497 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-01 23:20:42.377507 | orchestrator | Sunday 01 June 2025 23:20:31 +0000 (0:00:00.809) 0:00:06.419 ***********
2025-06-01 23:20:42.377518 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:20:42.377529 | orchestrator |
2025-06-01 23:20:42.377539 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-01 23:20:42.377550 | orchestrator | Sunday 01 June 2025 23:20:32 +0000 (0:00:00.280) 0:00:06.700 ***********
2025-06-01 23:20:42.377561 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:20:42.377571 | orchestrator |
2025-06-01 23:20:42.377582 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 23:20:42.377593 | orchestrator | Sunday 01 June 2025 23:20:32 +0000 (0:00:00.272) 0:00:06.972 ***********
2025-06-01 23:20:42.377603 | orchestrator |
2025-06-01 23:20:42.377614 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 23:20:42.377624 | orchestrator | Sunday 01 June 2025 23:20:32 +0000 (0:00:00.086) 0:00:07.059 ***********
2025-06-01 23:20:42.377635 | orchestrator |
2025-06-01 23:20:42.377645 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 23:20:42.377656 | orchestrator | Sunday 01 June 2025 23:20:32 +0000 (0:00:00.075) 0:00:07.135 ***********
2025-06-01 23:20:42.377667 | orchestrator |
2025-06-01 23:20:42.377678 | orchestrator | TASK [Print report file information] *******************************************
2025-06-01 23:20:42.377688 | orchestrator | Sunday 01 June 2025 23:20:32 +0000 (0:00:00.076) 0:00:07.211 ***********
2025-06-01 23:20:42.377699 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:20:42.377710 | orchestrator |
2025-06-01 23:20:42.377720 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-06-01 23:20:42.377731 | orchestrator | Sunday 01 June 2025 23:20:32 +0000 (0:00:00.309) 0:00:07.521 ***********
2025-06-01 23:20:42.377742 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:20:42.377752 | orchestrator |
2025-06-01 23:20:42.377783 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2025-06-01 23:20:42.377795 | orchestrator | Sunday 01 June 2025 23:20:33 +0000 (0:00:00.240) 0:00:07.762 ***********
2025-06-01 23:20:42.377812 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:20:42.377823 | orchestrator |
2025-06-01 23:20:42.377834 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2025-06-01 23:20:42.377845 | orchestrator | Sunday 01 June 2025 23:20:33 +0000 (0:00:00.145) 0:00:07.907 ***********
2025-06-01 23:20:42.377863 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:20:42.377874 | orchestrator |
2025-06-01 23:20:42.377885 | orchestrator | TASK [Set quorum test data] ****************************************************
2025-06-01 23:20:42.377896 | orchestrator | Sunday 01 June 2025 23:20:35 +0000 (0:00:01.729) 0:00:09.636 ***********
2025-06-01 23:20:42.377906 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:20:42.377917 | orchestrator |
2025-06-01 23:20:42.377947 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2025-06-01 23:20:42.377958 | orchestrator | Sunday 01 June 2025 23:20:35 +0000 (0:00:00.348) 0:00:09.984 ***********
2025-06-01 23:20:42.377968 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:20:42.377979 | orchestrator |
2025-06-01 23:20:42.377990 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2025-06-01 23:20:42.378000 | orchestrator | Sunday 01 June 2025 23:20:35 +0000 (0:00:00.411) 0:00:10.395 ***********
2025-06-01 23:20:42.378011 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:20:42.378084 | orchestrator |
2025-06-01 23:20:42.378096 | orchestrator | TASK [Set fsid test vars] ******************************************************
2025-06-01 23:20:42.378106 | orchestrator | Sunday 01 June 2025 23:20:36 +0000 (0:00:00.366) 0:00:10.762 ***********
2025-06-01 23:20:42.378117 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:20:42.378128 | orchestrator |
2025-06-01 23:20:42.378138 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2025-06-01 23:20:42.378149 | orchestrator | Sunday 01 June 2025 23:20:36 +0000 (0:00:00.316) 0:00:11.078 ***********
2025-06-01 23:20:42.378159 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:20:42.378170 | orchestrator |
2025-06-01 23:20:42.378181 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2025-06-01 23:20:42.378191 | orchestrator | Sunday 01 June 2025 23:20:36 +0000 (0:00:00.120) 0:00:11.199 ***********
2025-06-01 23:20:42.378202 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:20:42.378213 | orchestrator |
2025-06-01 23:20:42.378223 | orchestrator | TASK [Prepare status test vars] ************************************************
2025-06-01 23:20:42.378234 | orchestrator | Sunday 01 June 2025 23:20:36 +0000 (0:00:00.136) 0:00:11.336 ***********
2025-06-01 23:20:42.378245 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:20:42.378255 | orchestrator |
2025-06-01 23:20:42.378266 | orchestrator | TASK [Gather status data] ******************************************************
2025-06-01 23:20:42.378277 | orchestrator | Sunday 01 June 2025 23:20:36 +0000 (0:00:00.122) 0:00:11.458 ***********
2025-06-01 23:20:42.378287 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:20:42.378298 | orchestrator |
2025-06-01 23:20:42.378309 | orchestrator | TASK [Set health test data] ****************************************************
2025-06-01 23:20:42.378319 | orchestrator | Sunday 01 June 2025 23:20:38 +0000 (0:00:01.339) 0:00:12.797 ***********
2025-06-01 23:20:42.378330 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:20:42.378341 | orchestrator |
2025-06-01 23:20:42.378351 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2025-06-01 23:20:42.378362 | orchestrator | Sunday 01 June 2025 23:20:38 +0000 (0:00:00.345) 0:00:13.142 ***********
2025-06-01 23:20:42.378373 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:20:42.378383 | orchestrator |
2025-06-01 23:20:42.378394 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2025-06-01 23:20:42.378405 | orchestrator | Sunday 01 June 2025 23:20:38 +0000 (0:00:00.147) 0:00:13.290 ***********
2025-06-01 23:20:42.378416 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:20:42.378427 | orchestrator |
2025-06-01 23:20:42.378437 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2025-06-01 23:20:42.378448 | orchestrator | Sunday 01 June 2025 23:20:38 +0000 (0:00:00.148) 0:00:13.438 ***********
2025-06-01 23:20:42.378459 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:20:42.378469 | orchestrator |
2025-06-01 23:20:42.378480 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2025-06-01 23:20:42.378491 | orchestrator | Sunday 01 June 2025 23:20:39 +0000 (0:00:00.177) 0:00:13.616 ***********
2025-06-01 23:20:42.378509 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:20:42.378520 | orchestrator |
2025-06-01 23:20:42.378530 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-06-01 23:20:42.378541 | orchestrator | Sunday 01 June 2025 23:20:39 +0000 (0:00:00.397) 0:00:14.013 ***********
2025-06-01 23:20:42.378552 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 23:20:42.378562 | orchestrator |
2025-06-01 23:20:42.378573 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-06-01 23:20:42.378584 | orchestrator | Sunday 01 June 2025 23:20:39 +0000 (0:00:00.258) 0:00:14.271 ***********
2025-06-01 23:20:42.378594 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:20:42.378605 | orchestrator |
2025-06-01 23:20:42.378615 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-01 23:20:42.378626 | orchestrator | Sunday 01 June 2025 23:20:39 +0000 (0:00:00.250) 0:00:14.522 ***********
2025-06-01 23:20:42.378637 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 23:20:42.378648 | orchestrator |
2025-06-01 23:20:42.378659 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-01 23:20:42.378669 | orchestrator | Sunday 01 June 2025 23:20:41 +0000 (0:00:01.662) 0:00:16.185 ***********
2025-06-01 23:20:42.378680 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 23:20:42.378690 | orchestrator |
2025-06-01 23:20:42.378701 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-01 23:20:42.378712 | orchestrator | Sunday 01 June 2025 23:20:41 +0000 (0:00:00.279) 0:00:16.464 ***********
2025-06-01 23:20:42.378722 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 23:20:42.378738 | orchestrator |
2025-06-01 23:20:42.378757 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 23:20:45.216600 | orchestrator | Sunday 01 June 2025 23:20:42 +0000 (0:00:00.280) 0:00:16.745 ***********
2025-06-01 23:20:45.216705 | orchestrator |
2025-06-01 23:20:45.216720 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 23:20:45.216733 | orchestrator | Sunday 01 June 2025 23:20:42 +0000 (0:00:00.073) 0:00:16.818 ***********
2025-06-01 23:20:45.216744 | orchestrator |
2025-06-01 23:20:45.216777 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 23:20:45.216789 | orchestrator | Sunday 01 June 2025 23:20:42 +0000 (0:00:00.073) 0:00:16.891 ***********
2025-06-01 23:20:45.216800 | orchestrator |
2025-06-01 23:20:45.216812 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-06-01 23:20:45.216823 | orchestrator | Sunday 01 June 2025 23:20:42 +0000 (0:00:00.076) 0:00:16.967 ***********
2025-06-01 23:20:45.216834 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 23:20:45.216845 | orchestrator |
2025-06-01 23:20:45.216856 | orchestrator | TASK [Print report file information] *******************************************
2025-06-01 23:20:45.216867 | orchestrator | Sunday 01 June 2025 23:20:44 +0000 (0:00:01.727) 0:00:18.695 ***********
2025-06-01 23:20:45.216878 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-06-01 23:20:45.216890 | orchestrator |     "msg": [
2025-06-01 23:20:45.216902 | orchestrator |         "Validator run completed.",
2025-06-01 23:20:45.216914 | orchestrator |         "You can find the report file here:",
2025-06-01 23:20:45.216989 | orchestrator |         "/opt/reports/validator/ceph-mons-validator-2025-06-01T23:20:26+00:00-report.json",
2025-06-01 23:20:45.217002 | orchestrator |         "on the following host:",
2025-06-01 23:20:45.217013 | orchestrator |         "testbed-manager"
2025-06-01 23:20:45.217024 | orchestrator |     ]
2025-06-01 23:20:45.217036 | orchestrator | }
2025-06-01 23:20:45.217047 | orchestrator |
2025-06-01 23:20:45.217058 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:20:45.217071 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-06-01 23:20:45.217104 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 23:20:45.217116 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 23:20:45.217128 | orchestrator |
2025-06-01 23:20:45.217139 | orchestrator |
2025-06-01 23:20:45.217149 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:20:45.217161 | orchestrator | Sunday 01 June 2025 23:20:44 +0000 (0:00:00.701) 0:00:19.396 ***********
2025-06-01 23:20:45.217171 | orchestrator | ===============================================================================
2025-06-01 23:20:45.217182 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.73s
2025-06-01 23:20:45.217193 | orchestrator | Write report file ------------------------------------------------------- 1.73s
2025-06-01 23:20:45.217204 | orchestrator | Aggregate test results step one ----------------------------------------- 1.66s
2025-06-01 23:20:45.217215 | orchestrator | Gather status data ------------------------------------------------------ 1.34s
2025-06-01 23:20:45.217226 | orchestrator | Get container info ------------------------------------------------------ 0.99s
2025-06-01 23:20:45.217237 | orchestrator | Create report output directory ------------------------------------------ 0.95s
2025-06-01 23:20:45.217248 | orchestrator | Aggregate test results step one ----------------------------------------- 0.81s
2025-06-01 23:20:45.217259 | orchestrator | Print report file information ------------------------------------------- 0.70s
2025-06-01 23:20:45.217270 | orchestrator | Get timestamp for report file ------------------------------------------- 0.68s
2025-06-01 23:20:45.217281 | orchestrator | Set test result to passed if container is existing ---------------------- 0.65s
2025-06-01 23:20:45.217291 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.41s
2025-06-01 23:20:45.217302 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.40s
2025-06-01 23:20:45.217314 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.37s
2025-06-01 23:20:45.217325 | orchestrator | Set quorum test data ---------------------------------------------------- 0.35s
2025-06-01 23:20:45.217336 | orchestrator | Set health test data ---------------------------------------------------- 0.35s
2025-06-01 23:20:45.217347 | orchestrator | Prepare test data for container existance test -------------------------- 0.33s
2025-06-01 23:20:45.217357 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.33s
2025-06-01 23:20:45.217368 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s
2025-06-01 23:20:45.217379 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.32s
2025-06-01 23:20:45.217390 | orchestrator | Print report file information ------------------------------------------- 0.31s
2025-06-01 23:20:45.560790 | orchestrator | + osism validate ceph-mgrs
2025-06-01 23:20:47.400308 | orchestrator | Registering Redlock._acquired_script
2025-06-01 23:20:47.400414 | orchestrator | Registering Redlock._extend_script
2025-06-01 23:20:47.400428 | orchestrator | Registering Redlock._release_script
2025-06-01 23:21:07.814103 | orchestrator |
2025-06-01 23:21:07.814254 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2025-06-01 23:21:07.814272 | orchestrator |
2025-06-01 23:21:07.814284 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-06-01 23:21:07.814296 | orchestrator | Sunday 01 June 2025 23:20:52 +0000 (0:00:00.474) 0:00:00.474 ***********
2025-06-01 23:21:07.814309 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 23:21:07.814320 | orchestrator |
2025-06-01 23:21:07.814331 | orchestrator | TASK [Create report output directory] ******************************************
2025-06-01 23:21:07.814342 | orchestrator | Sunday 01 June 2025 23:20:52 +0000 (0:00:00.644) 0:00:01.119 ***********
2025-06-01 23:21:07.814373 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 23:21:07.814385 | orchestrator |
2025-06-01 23:21:07.814396 | orchestrator | TASK [Define report vars] ******************************************************
2025-06-01 23:21:07.814432 | orchestrator | Sunday 01 June 2025 23:20:53 +0000 (0:00:00.935) 0:00:02.055 ***********
2025-06-01 23:21:07.814443 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:21:07.814456 | orchestrator |
2025-06-01 23:21:07.814467 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-06-01 23:21:07.814478 | orchestrator | Sunday 01 June 2025 23:20:53 +0000 (0:00:00.317) 0:00:02.372 ***********
2025-06-01 23:21:07.814492 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:21:07.814505 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:21:07.814517 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:21:07.814530 | orchestrator |
2025-06-01 23:21:07.814542 | orchestrator | TASK [Get container info] ******************************************************
2025-06-01 23:21:07.814555 | orchestrator | Sunday 01 June 2025 23:20:54 +0000 (0:00:00.326) 0:00:02.699 ***********
2025-06-01 23:21:07.814568 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:21:07.814580 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:21:07.814591 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:21:07.814602 | orchestrator |
2025-06-01 23:21:07.814613 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-06-01 23:21:07.814624 | orchestrator | Sunday 01 June 2025 23:20:55 +0000 (0:00:01.045) 0:00:03.745 ***********
2025-06-01 23:21:07.814635 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:21:07.814646 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:21:07.814657 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:21:07.814667 | orchestrator |
2025-06-01 23:21:07.814678 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-06-01 23:21:07.814736 | orchestrator | Sunday 01 June 2025 23:20:55 +0000 (0:00:00.312) 0:00:04.057 ***********
2025-06-01 23:21:07.814751 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:21:07.814762 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:21:07.814773 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:21:07.814783 | orchestrator |
2025-06-01 23:21:07.814794 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-01 23:21:07.814806 | orchestrator | Sunday 01 June 2025 23:20:56 +0000 (0:00:00.609) 0:00:04.667 ***********
2025-06-01 23:21:07.814817 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:21:07.814827 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:21:07.814838 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:21:07.814849 | orchestrator |
2025-06-01 23:21:07.814860 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2025-06-01 23:21:07.814871 | orchestrator | Sunday 01 June 2025 23:20:56 +0000 (0:00:00.337) 0:00:05.004 ***********
2025-06-01 23:21:07.814881 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:21:07.814892 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:21:07.814903 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:21:07.814914 | orchestrator |
2025-06-01 23:21:07.814924 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2025-06-01 23:21:07.814959 | orchestrator | Sunday 01 June 2025 23:20:56 +0000 (0:00:00.317) 0:00:05.321 ***********
2025-06-01 23:21:07.814970 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:21:07.814981 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:21:07.814992 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:21:07.815003 | orchestrator |
2025-06-01 23:21:07.815014 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-01 23:21:07.815025 | orchestrator | Sunday 01 June 2025 23:20:57 +0000 (0:00:00.314) 0:00:05.636 ***********
2025-06-01 23:21:07.815035 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:21:07.815046 | orchestrator |
2025-06-01 23:21:07.815057 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-01 23:21:07.815068 | orchestrator | Sunday 01 June 2025 23:20:58 +0000 (0:00:00.837) 0:00:06.473 ***********
2025-06-01 23:21:07.815079 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:21:07.815090 | orchestrator |
2025-06-01 23:21:07.815101 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-01 23:21:07.815122 | orchestrator | Sunday 01 June 2025 23:20:58 +0000 (0:00:00.244) 0:00:06.718 ***********
2025-06-01 23:21:07.815133 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:21:07.815144 | orchestrator |
2025-06-01 23:21:07.815155 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 23:21:07.815166 | orchestrator | Sunday 01 June 2025 23:20:58 +0000 (0:00:00.254) 0:00:06.973 ***********
2025-06-01 23:21:07.815176 | orchestrator |
2025-06-01 23:21:07.815188 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 23:21:07.815199 | orchestrator | Sunday 01 June 2025 23:20:58 +0000 (0:00:00.079) 0:00:07.052 ***********
2025-06-01 23:21:07.815209 | orchestrator |
2025-06-01 23:21:07.815220 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 23:21:07.815231 | orchestrator | Sunday 01 June 2025 23:20:58 +0000 (0:00:00.073) 0:00:07.126 ***********
2025-06-01 23:21:07.815242 | orchestrator |
2025-06-01 23:21:07.815252 | orchestrator | TASK [Print report file information] *******************************************
2025-06-01 23:21:07.815263 | orchestrator | Sunday 01 June 2025 23:20:58 +0000 (0:00:00.075) 0:00:07.201 ***********
2025-06-01 23:21:07.815274 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:21:07.815284 | orchestrator |
2025-06-01 23:21:07.815295 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-06-01 23:21:07.815306 | orchestrator | Sunday 01 June 2025 23:20:59 +0000 (0:00:00.263) 0:00:07.465 ***********
2025-06-01 23:21:07.815317 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:21:07.815328 | orchestrator |
2025-06-01 23:21:07.815361 | orchestrator | TASK [Define mgr module test vars] *********************************************
2025-06-01 23:21:07.815372 | orchestrator | Sunday 01 June 2025 23:20:59 +0000 (0:00:00.260) 0:00:07.726 ***********
2025-06-01 23:21:07.815383 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:21:07.815394 | orchestrator |
2025-06-01 23:21:07.815405 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2025-06-01 23:21:07.815416 | orchestrator | Sunday 01 June 2025 23:20:59 +0000 (0:00:00.116) 0:00:07.843 ***********
2025-06-01 23:21:07.815427 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:21:07.815438 | orchestrator |
2025-06-01 23:21:07.815449 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2025-06-01 23:21:07.815460 | orchestrator | Sunday 01 June 2025 23:21:01 +0000 (0:00:01.838) 0:00:09.681 ***********
2025-06-01 23:21:07.815471 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:21:07.815481 | orchestrator |
2025-06-01 23:21:07.815492 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2025-06-01 23:21:07.815504 | orchestrator | Sunday 01 June 2025 23:21:01 +0000 (0:00:00.297) 0:00:09.979 ***********
2025-06-01 23:21:07.815515 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:21:07.815525 | orchestrator |
2025-06-01 23:21:07.815536 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2025-06-01 23:21:07.815547 | orchestrator | Sunday 01 June 2025 23:21:02 +0000 (0:00:00.955) 0:00:10.934 ***********
2025-06-01 23:21:07.815558 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:21:07.815569 | orchestrator |
2025-06-01 23:21:07.815580 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2025-06-01 23:21:07.815591 | orchestrator | Sunday 01 June 2025 23:21:02 +0000 (0:00:00.164) 0:00:11.099 ***********
2025-06-01 23:21:07.815602 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:21:07.815613 | orchestrator |
2025-06-01 23:21:07.815624 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-06-01 23:21:07.815635 | orchestrator | Sunday 01 June 2025 23:21:02 +0000 (0:00:00.198) 0:00:11.297 ***********
2025-06-01 23:21:07.815646 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 23:21:07.815656 | orchestrator |
2025-06-01 23:21:07.815667 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-06-01 23:21:07.815678 | orchestrator | Sunday 01 June 2025 23:21:03 +0000 (0:00:00.246) 0:00:11.543 ***********
2025-06-01 23:21:07.815689 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:21:07.815707 | orchestrator |
2025-06-01 23:21:07.815718 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-01 23:21:07.815729 | orchestrator | Sunday 01 June 2025 23:21:03 +0000 (0:00:00.229) 0:00:11.773 ***********
2025-06-01 23:21:07.815739 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 23:21:07.815750 | orchestrator |
2025-06-01 23:21:07.815761 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-01 23:21:07.815772 | orchestrator | Sunday 01 June 2025 23:21:04 +0000 (0:00:01.281) 0:00:13.054 ***********
2025-06-01 23:21:07.815783 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 23:21:07.815794 | orchestrator |
2025-06-01 23:21:07.815805 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-01 23:21:07.815816 | orchestrator | Sunday 01 June 2025 23:21:04 +0000 (0:00:00.250) 0:00:13.305 ***********
2025-06-01 23:21:07.815827 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 23:21:07.815837 | orchestrator |
2025-06-01 23:21:07.815848 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 23:21:07.815859 | orchestrator | Sunday 01 June 2025 23:21:05 +0000 (0:00:00.236) 0:00:13.541 ***********
2025-06-01 23:21:07.815870 | orchestrator |
2025-06-01 23:21:07.815881 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 23:21:07.815892 | orchestrator | Sunday 01 June 2025 23:21:05 +0000 (0:00:00.070) 0:00:13.611 ***********
2025-06-01 23:21:07.815902 | orchestrator |
2025-06-01 23:21:07.815913 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-01 23:21:07.815924 | orchestrator | Sunday 01 June 2025 23:21:05 +0000 (0:00:00.073) 0:00:13.685 ***********
2025-06-01 23:21:07.815951 | orchestrator |
2025-06-01 23:21:07.815962 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-06-01 23:21:07.815973 | orchestrator | Sunday 01 June 2025 23:21:05 +0000 (0:00:00.094) 0:00:13.780 ***********
2025-06-01 23:21:07.815984 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-01 23:21:07.815994 | orchestrator |
2025-06-01 23:21:07.816005 | orchestrator | TASK [Print report file information] *******************************************
2025-06-01 23:21:07.816016 | orchestrator | Sunday 01 June 2025 23:21:07 +0000 (0:00:01.992) 0:00:15.772 ***********
2025-06-01 23:21:07.816027 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-06-01 23:21:07.816038 | orchestrator |     "msg": [
2025-06-01 23:21:07.816050 | orchestrator |         "Validator run completed.",
2025-06-01 23:21:07.816061 | orchestrator |         "You can find the report file here:",
2025-06-01 23:21:07.816072 | orchestrator |         "/opt/reports/validator/ceph-mgrs-validator-2025-06-01T23:20:52+00:00-report.json",
2025-06-01 23:21:07.816085 | orchestrator |         "on the following host:",
2025-06-01 23:21:07.816096 | orchestrator |         "testbed-manager"
2025-06-01 23:21:07.816106 | orchestrator |     ]
2025-06-01 23:21:07.816118 | orchestrator | }
2025-06-01 23:21:07.816129 | orchestrator |
2025-06-01 23:21:07.816140 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:21:07.816153 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-01 23:21:07.816176 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 23:21:07.816194 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 23:21:08.198290 | orchestrator |
2025-06-01 23:21:08.198417 | orchestrator |
2025-06-01 23:21:08.198431 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:21:08.198445 | orchestrator | Sunday 01 June 2025 23:21:07 +0000 (0:00:00.450) 0:00:16.223 ***********
2025-06-01 23:21:08.198456 | orchestrator | ===============================================================================
2025-06-01 23:21:08.198497 | orchestrator | Write report file ------------------------------------------------------- 1.99s
2025-06-01 23:21:08.198508 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.84s
2025-06-01 23:21:08.198537 | orchestrator | Aggregate test results step one ----------------------------------------- 1.28s
2025-06-01 23:21:08.198548 | orchestrator | Get container info ------------------------------------------------------ 1.05s
2025-06-01 23:21:08.198559 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.96s
2025-06-01 23:21:08.198569 | orchestrator | Create report output directory ------------------------------------------ 0.94s
2025-06-01 23:21:08.198580 | orchestrator | Aggregate test results step one ----------------------------------------- 0.84s
2025-06-01 23:21:08.198591 | orchestrator | Get timestamp for report file ------------------------------------------- 0.64s
2025-06-01 23:21:08.198601 | orchestrator | Set test result to passed if container is existing ---------------------- 0.61s
2025-06-01 23:21:08.198612 | orchestrator | Print report file information ------------------------------------------- 0.45s
2025-06-01 23:21:08.198622 | orchestrator | Prepare test data ------------------------------------------------------- 0.34s
2025-06-01 23:21:08.198633 | orchestrator | Prepare test data for container existance test -------------------------- 0.33s
2025-06-01 23:21:08.198644 | orchestrator | Define report vars ------------------------------------------------------ 0.32s
2025-06-01 23:21:08.198654 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.32s
2025-06-01 23:21:08.198665 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.31s
2025-06-01 23:21:08.198675 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s
2025-06-01 23:21:08.198686 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.30s
2025-06-01 23:21:08.198697 | orchestrator | Print report file information ------------------------------------------- 0.26s
2025-06-01 23:21:08.198707 | orchestrator | Fail due to missing containers ------------------------------------------ 0.26s
2025-06-01 23:21:08.198718 | orchestrator | Aggregate test results step three --------------------------------------- 0.25s
2025-06-01 23:21:08.534682 | orchestrator | + osism validate ceph-osds
2025-06-01 23:21:10.383611 | orchestrator | Registering Redlock._acquired_script
2025-06-01 23:21:10.383739 | orchestrator | Registering Redlock._extend_script
2025-06-01 23:21:10.383756 | orchestrator | Registering Redlock._release_script
2025-06-01 23:21:20.038080 | orchestrator |
2025-06-01 23:21:20.038220 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2025-06-01 23:21:20.038237 | orchestrator |
2025-06-01 23:21:20.038249 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-06-01 23:21:20.038261 | orchestrator | Sunday 01 June 2025 23:21:15 +0000 (0:00:00.458) 0:00:00.458 ***********
2025-06-01 23:21:20.038273 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-01 23:21:20.038284 | orchestrator |
2025-06-01 23:21:20.038295 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-01 23:21:20.038306 | orchestrator | Sunday 01 June 2025 23:21:15 +0000 (0:00:00.672) 0:00:01.130 ***********
2025-06-01 23:21:20.038317 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-01 23:21:20.038328 | orchestrator |
2025-06-01 23:21:20.038338 | orchestrator | TASK [Create report output directory] ******************************************
2025-06-01 23:21:20.038349 | orchestrator | Sunday 01 June 2025 23:21:16 +0000 (0:00:00.443) 0:00:01.574 ***********
2025-06-01 23:21:20.038360 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-01 23:21:20.038370 | orchestrator |
2025-06-01 23:21:20.038381 | orchestrator | TASK [Define report vars] ******************************************************
2025-06-01 23:21:20.038392 | orchestrator | Sunday 01 June 2025 23:21:17 +0000 (0:00:01.086) 0:00:02.661 ***********
2025-06-01 23:21:20.038403 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:20.038415 | orchestrator |
2025-06-01 23:21:20.038426 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-06-01 23:21:20.038464 | orchestrator | Sunday 01 June 2025 23:21:17 +0000 (0:00:00.136) 0:00:02.798 ***********
2025-06-01 23:21:20.038477 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:21:20.038491 | orchestrator |
2025-06-01 23:21:20.038504 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-06-01 23:21:20.038516 | orchestrator | Sunday 01 June 2025 23:21:17 +0000 (0:00:00.137) 0:00:02.935 ***********
2025-06-01 23:21:20.038528 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:21:20.038541 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:21:20.038555 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:21:20.038567 | orchestrator |
2025-06-01 23:21:20.038579 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-06-01 23:21:20.038592 | orchestrator | Sunday 01 June 2025 23:21:17 +0000 (0:00:00.307) 0:00:03.243 ***********
2025-06-01 23:21:20.038604 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:20.038616 | orchestrator |
2025-06-01 23:21:20.038629 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-06-01 23:21:20.038642 | orchestrator | Sunday 01 June 2025 23:21:18 +0000 (0:00:00.169) 0:00:03.413 ***********
2025-06-01 23:21:20.038655 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:20.038667 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:21:20.038680 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:21:20.038692 | orchestrator |
2025-06-01 23:21:20.038704 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2025-06-01 23:21:20.038717 | orchestrator | Sunday 01 June 2025 23:21:18 +0000 (0:00:00.324) 0:00:03.738 ***********
2025-06-01 23:21:20.038729 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:20.038741 | orchestrator |
2025-06-01 23:21:20.038753 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-01 23:21:20.038766 | orchestrator | Sunday 01 June 2025 23:21:19 +0000 (0:00:00.629) 0:00:04.368 ***********
2025-06-01 23:21:20.038778 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:20.038791 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:21:20.038803 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:21:20.038816 | orchestrator |
2025-06-01 23:21:20.038828 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2025-06-01 23:21:20.038838 | orchestrator | Sunday 01 June 2025 23:21:19 +0000 (0:00:00.606) 0:00:04.975 ***********
2025-06-01 23:21:20.038867 | orchestrator | skipping: [testbed-node-3] => (item={'id': '87d7dafd7041706ce8e817cd80498c90bd53410b0e31d87edcb0166c4fcdbbf1', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-06-01 23:21:20.038883 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ef0202c074a8b7107ef673df8a6bba5aac6c6a3284448598b52ce2905cb5d9a7', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-06-01 23:21:20.038894 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b352c38f46fea458ed37724a0ce4da3fd64828572f0f7ff1275bcc521b19a988', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-06-01 23:21:20.038908 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4a51b7ac38c1019e7a0071bcf99d7f31094c9731402b2332bcf2ac398a69154b', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-06-01 23:21:20.038920 | orchestrator | skipping: [testbed-node-3] => (item={'id': '949dbace75e0963921cdd09db4a7d25daac1b9cc9d7e0e12953ca29e23f86a24', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2025-06-01 23:21:20.038976 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd52a5b8b0babe34f7b65d7e21f9f11700766bc66e9ac75ae2c8d4d86215e6896', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})
2025-06-01 23:21:20.038998 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd6836a5ff53b5098af1bef6a4f449c8285b9a0b2f00ce5d355d2521c70e9e704', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})
2025-06-01 23:21:20.039022 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6f9000933ad37203c710e9a7ee04bd0f05fe0dab621f1845bce78b5233b083aa', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})
2025-06-01 23:21:20.039034 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c2ec95bf201c9f5fe62b125fa37083c71ce17f0affa0e8c0cd13b5e0d20723ec', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})
2025-06-01 23:21:20.039045 | orchestrator | skipping: [testbed-node-3] => (item={'id': '85d6ec88f46c989ef819e3b6e0cab25150d6fa35101dffa64f0246b92101b5bf', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name':
'/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-01 23:21:20.039057 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2af15b9696c78ee7a8acdbcfecb637c3051beb6e56081eaf06093aa456c10fce', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2025-06-01 23:21:20.039068 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e68536c10143ded7c7f95d2e8c46f14e47748df78f19568577e2d29e673114bf', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 25 minutes'})  2025-06-01 23:21:20.039079 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b0090b018dd8d85b71b9feb9c984d079d6719fb9f3e46dfbb7f7da54ed368123', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-01 23:21:20.039090 | orchestrator | skipping: [testbed-node-4] => (item={'id': '82c6c44b042c772a7de7245b40d5ac4e70bf29257973720e9e8fbf5b2e3d1db1', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-06-01 23:21:20.039102 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'edd90f3c9a8c4cf269750db2edb3b2cd5fe75d80c9e51925449a1703da3d3322', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-06-01 23:21:20.039113 | orchestrator | skipping: [testbed-node-4] => (item={'id': '10d0faa9f788ad6a8c4ca0f997ae671db925dda556c056f1fdf1e8655c635ac1', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-06-01 23:21:20.039126 | orchestrator 
| ok: [testbed-node-3] => (item={'id': 'e96cc8607d3b6b53a9b01cc5418aab5ef1a22686ebf805597aa39aff873a5ff9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 26 minutes'}) 2025-06-01 23:21:20.039138 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2d7313ddd8b1a762d12567a77908198be00c09a376331a7c3323a7183a0c3a76', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-06-01 23:21:20.039149 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c2e712fdb3d0031e47fe850d40192987dd06bc8f4e14bdb09c28a29c8570cc14', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-06-01 23:21:20.039168 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'dc1c711a27d81a92627c418678b85db0f27715530ad483e53ad86ca04ad21b51', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-06-01 23:21:20.039187 | orchestrator | ok: [testbed-node-3] => (item={'id': 'fd3716fcec976ab1bff3e1d06d426ef79b16ed0fc06e6168db7c170f2e1ab75f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 26 minutes'}) 2025-06-01 23:21:20.193844 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b4224c71899f864c0167ad74bde5e3ec17fbd9e2fd1d9f1f4955525888f19e3e', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})  2025-06-01 23:21:20.193980 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'aa8074be178c627e43af66ed4798faa3c5470a40ea5375b54acd0202fb45cf18', 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2025-06-01 23:21:20.193996 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cd6cb323ae8aab43c4de168a9e804ac497f397975795a0146557f431547d0a42', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})  2025-06-01 23:21:20.194010 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2f07d7c02ab9ba5589eab368fcd119a984fa3539c2f4eb2a011972b367ce2674', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-06-01 23:21:20.194066 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ec19bc9ed2bb982be49c43520e817a2e131173e387c35a3f14c3858206dcea74', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-01 23:21:20.194078 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8ff6257aa391e11a15a63e3abed7a380b9ec542d2b3166d0166443a850d277c0', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-06-01 23:21:20.194090 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fc473110bb0a8845e28ea23254794686118dddad07b030f22fdb7c04e5a951e6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2025-06-01 23:21:20.194102 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bbef8a822235703967b53a985e85c95cf4751549ff626b517a3e418b36831e84', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 
'Up 31 minutes'})  2025-06-01 23:21:20.194139 | orchestrator | skipping: [testbed-node-4] => (item={'id': '66cedbcfdb4fdd39309c5441a18574db2366e815d08ba13bb8c0ea801cfd2ace', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 25 minutes'})  2025-06-01 23:21:20.194151 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bc4beae3e778070c79425492457b9c7db95fe37b89fadf0ba2b4bca0fda75e53', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-01 23:21:20.194163 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1f522747abaef5d52ea7b02935f3cd7e0ab70670d52c45cba1d202b038f60c45', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-06-01 23:21:20.194174 | orchestrator | ok: [testbed-node-4] => (item={'id': '6e64b084909a3db5a6543426023ad9576f82f958595accdcaf8766475940aca9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 26 minutes'}) 2025-06-01 23:21:20.194211 | orchestrator | ok: [testbed-node-4] => (item={'id': 'e96a82d9cc0d8be9694e7b32ea4136817325478e1ee5bbcb29d2a39dd6901bfc', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 26 minutes'}) 2025-06-01 23:21:20.194223 | orchestrator | skipping: [testbed-node-4] => (item={'id': '97ee1c56e195fae3f705d8ae283934bb6966cdf04a34f4fea2216bb9243e2b2c', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2025-06-01 23:21:20.194234 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a1f74c4d48042af56b5eb38c7bc88939b60d3b1172dc28893375f6618e2171da', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-06-01 23:21:20.194263 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6f8016ea3c6cd1ee06f6d2d7a928cfb73df8010a54f88b1ef7b7c93133eb7fec', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-06-01 23:21:20.194275 | orchestrator | skipping: [testbed-node-4] => (item={'id': '00a98280971b2c25cac46fb4b6f7da60a1bfb31bd49c5b22682cd2b630032b81', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-01 23:21:20.194287 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a1bb3600443168b47ef096cf1216913e63ae74f4548cf1c9a9b6271a92549dc4', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-01 23:21:20.194298 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7a026d98225a79e3533ed9a865d44d1fff2e082e64b676340a7df6d5ee9497dd', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-01 23:21:20.194309 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0b2441ddee68b11e7b77800680ed61e6d492bb55a7321ecaa18d34129d7be1ea', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-06-01 23:21:20.194320 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8bf39126871d191e689a54c60b8472aada63b86dece9cf59c1e919a166f3a798', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  
2025-06-01 23:21:20.194331 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8f5dfc0b8ca9b8087f87f49f657069f16f3b7a7891c75cbe0bee9e1c20467160', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-06-01 23:21:20.194342 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5dd27ec523df170761651b852ac77033b1812311122d1d0a7d729067aba80e9b', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-06-01 23:21:20.194358 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd29243911c6ab4e2f9c056a60cfb70fb12fe636b4b0a821df495d4aa6f369338', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2025-06-01 23:21:20.194369 | orchestrator | skipping: [testbed-node-5] => (item={'id': '63336e33d551e16bd1f1d11d88eee3a37cf22cc6ebe67edeb9612bfcc6fc5a38', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2025-06-01 23:21:20.194391 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ad3d32861568aedb5da613f0c5d70991291afea2a4ec2b61e392173c9101e227', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})
2025-06-01 23:21:20.194406 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ae452ab1d3838fdec1efd65b8d6ff357ad3dde287d3e1dd1e1f4be5e0f2af36a', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})
2025-06-01 23:21:20.194418 | orchestrator | skipping: [testbed-node-5] => (item={'id': '12f88e9cfbb1b0fe0e28e14ab4152c45bccec8e0386e9d138dce1a21c201fb3b', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})
2025-06-01 23:21:20.194430 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6622ea286f20a12b9e8b364fc09eb29643c6d7653becc5deb49998c0725a31a4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 23 minutes'})
2025-06-01 23:21:20.194449 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0fefda5320283691f6ea63b483feec7000ec04d3322906529bf3e37957d079ce', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})
2025-06-01 23:21:29.033668 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ea8a7ff5e03a2a73e0f3669ecd3391c8781471a410bce24c8c22479e31ec4404', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 25 minutes'})
2025-06-01 23:21:29.033789 | orchestrator | ok: [testbed-node-5] => (item={'id': '411eb5cb77dcbb5b385851998b6061233e1d7c814fb0d5ebe5824fce217803d6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 26 minutes'})
2025-06-01 23:21:29.033807 | orchestrator | ok: [testbed-node-5] => (item={'id': '48e9401186121933f55126b9c75240c947f7bf88905db771b62bdc9acc48ce21', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 26 minutes'})
2025-06-01 23:21:29.033820 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9478bffc4acefaafc1b03b8dc7942a2c1296bfff19a22f3fe5420ce5589f837f', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})
2025-06-01 23:21:29.033833 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd4a558ecdd453ad4f240ad8ae1be9623159b4edf33dee5a6e306c1b4d808d9ca', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2025-06-01 23:21:29.033846 | orchestrator | skipping: [testbed-node-5] => (item={'id': '43fe6f8eb516c26be5468b730c6c9410f458686501618a51ad91b877aa1318f7', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2025-06-01 23:21:29.033858 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7e7b6ef6e79f127755d501c4680e6f9b508bcc50c0bcd3ba8664deafa25cafee', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})
2025-06-01 23:21:29.033869 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8fc50884fa778cedc2a7ceb8199234278968a81addd91f1ea647cce82bfbc73b', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})
2025-06-01 23:21:29.033897 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b6169dd4e58271cf855160e6ddbb2f7432dacf65f81677a6de6577986209c23e', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2025-06-01 23:21:29.033976 | orchestrator |
2025-06-01 23:21:29.033992 | orchestrator | TASK [Get count of ceph-osd containers on host] *******************************
2025-06-01 23:21:29.034005 | orchestrator | Sunday 01 June 2025 23:21:20 +0000 (0:00:00.453) 0:00:05.428 ***********
2025-06-01 23:21:29.034076 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:29.034091 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:21:29.034101 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:21:29.034112 | orchestrator |
2025-06-01 23:21:29.034123 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************
2025-06-01 23:21:29.034143 | orchestrator | Sunday 01 June 2025 23:21:20 +0000 (0:00:00.598) 0:00:05.765 ***********
2025-06-01 23:21:29.034154 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:21:29.034168 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:21:29.034185 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:21:29.034203 | orchestrator |
2025-06-01 23:21:29.034216 | orchestrator | TASK [Set test result to passed if count matches] *****************************
2025-06-01 23:21:29.034229 | orchestrator | Sunday 01 June 2025 23:21:21 +0000 (0:00:00.321) 0:00:06.364 ***********
2025-06-01 23:21:29.034241 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:29.034254 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:21:29.034267 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:21:29.034280 | orchestrator |
2025-06-01 23:21:29.034292 | orchestrator | TASK [Prepare test data] ******************************************************
2025-06-01 23:21:29.034305 | orchestrator | Sunday 01 June 2025 23:21:21 +0000 (0:00:00.347) 0:00:06.686 ***********
2025-06-01 23:21:29.034318 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:29.034330 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:21:29.034341 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:21:29.034352 | orchestrator |
2025-06-01 23:21:29.034363 | orchestrator | TASK [Get list of ceph-osd containers that are not running] *******************
2025-06-01 23:21:29.034375 | orchestrator | Sunday 01 June 2025 23:21:21 +0000 (0:00:00.347) 0:00:07.033 ***********
2025-06-01 23:21:29.034386 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2025-06-01 23:21:29.034398 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2025-06-01 23:21:29.034409 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:21:29.034420 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2025-06-01 23:21:29.034431 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2025-06-01 23:21:29.034460 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:21:29.034472 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2025-06-01 23:21:29.034483 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2025-06-01 23:21:29.034494 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:21:29.034505 | orchestrator |
2025-06-01 23:21:29.034515 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************
2025-06-01 23:21:29.034526 | orchestrator | Sunday 01 June 2025 23:21:22 +0000 (0:00:00.347) 0:00:07.381 ***********
2025-06-01 23:21:29.034537 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:29.034548 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:21:29.034559 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:21:29.034570 | orchestrator |
2025-06-01 23:21:29.034581 | orchestrator | TASK [Set test result to failed if an OSD is not running] *********************
2025-06-01 23:21:29.034591 | orchestrator | Sunday 01 June 2025 23:21:22 +0000 (0:00:00.573) 0:00:07.954 ***********
2025-06-01 23:21:29.034602 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:21:29.034613 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:21:29.034624 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:21:29.034645 | orchestrator |
2025-06-01 23:21:29.034656 | orchestrator | TASK [Set test result to failed if an OSD is not running] *********************
2025-06-01 23:21:29.034667 | orchestrator | Sunday 01 June 2025 23:21:23 +0000 (0:00:00.335) 0:00:08.290 ***********
2025-06-01 23:21:29.034678 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:21:29.034688 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:21:29.034699 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:21:29.034710 | orchestrator |
2025-06-01 23:21:29.034721 | orchestrator | TASK [Set test result to passed if all containers are running] ****************
2025-06-01 23:21:29.034731 | orchestrator | Sunday 01 June 2025 23:21:23 +0000 (0:00:00.328) 0:00:08.619 ***********
2025-06-01 23:21:29.034743 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:29.034754 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:21:29.034765 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:21:29.034775 | orchestrator |
2025-06-01 23:21:29.034787 | orchestrator | TASK [Aggregate test results step one] ****************************************
2025-06-01 23:21:29.034798 | orchestrator | Sunday 01 June 2025 23:21:23 +0000 (0:00:00.324) 0:00:08.944 ***********
2025-06-01 23:21:29.034808 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:21:29.034819 | orchestrator |
2025-06-01 23:21:29.034830 | orchestrator | TASK [Aggregate test results step two] ****************************************
2025-06-01 23:21:29.034841 | orchestrator | Sunday 01 June 2025 23:21:24 +0000 (0:00:00.770) 0:00:09.714 ***********
2025-06-01 23:21:29.034852 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:21:29.034863 | orchestrator |
2025-06-01 23:21:29.034873 | orchestrator | TASK [Aggregate test results step three] **************************************
2025-06-01 23:21:29.034884 | orchestrator | Sunday 01 June 2025 23:21:24 +0000 (0:00:00.246) 0:00:09.961 ***********
2025-06-01 23:21:29.034895 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:21:29.034906 | orchestrator |
2025-06-01 23:21:29.034916 | orchestrator | TASK [Flush handlers] *********************************************************
2025-06-01 23:21:29.034927 | orchestrator | Sunday 01 June 2025 23:21:24 +0000 (0:00:00.067) 0:00:10.192 ***********
2025-06-01 23:21:29.034960 | orchestrator |
2025-06-01 23:21:29.034971 | orchestrator | TASK [Flush handlers] *********************************************************
2025-06-01 23:21:29.034983 | orchestrator | Sunday 01 June 2025 23:21:25 +0000 (0:00:00.070) 0:00:10.260 ***********
2025-06-01 23:21:29.034994 | orchestrator |
2025-06-01 23:21:29.035004 | orchestrator | TASK [Flush handlers] *********************************************************
2025-06-01 23:21:29.035015 | orchestrator | Sunday 01 June 2025 23:21:25 +0000 (0:00:00.072) 0:00:10.330 ***********
2025-06-01 23:21:29.035026 | orchestrator |
2025-06-01 23:21:29.035037 | orchestrator | TASK [Print report file information] ******************************************
2025-06-01 23:21:29.035048 | orchestrator | Sunday 01 June 2025 23:21:25 +0000 (0:00:00.250) 0:00:10.403 ***********
2025-06-01 23:21:29.035059 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:21:29.035069 | orchestrator |
2025-06-01 23:21:29.035080 | orchestrator | TASK [Fail early due to containers not running] *******************************
2025-06-01 23:21:29.035091 | orchestrator | Sunday 01 June 2025 23:21:25 +0000 (0:00:00.242) 0:00:10.653 ***********
2025-06-01 23:21:29.035102 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:21:29.035113 | orchestrator |
2025-06-01 23:21:29.035124 | orchestrator | TASK [Prepare test data] ******************************************************
2025-06-01 23:21:29.035135 | orchestrator | Sunday 01 June 2025 23:21:25 +0000 (0:00:00.332) 0:00:10.895 ***********
2025-06-01 23:21:29.035145 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:29.035156 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:21:29.035167 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:21:29.035178 | orchestrator |
2025-06-01 23:21:29.035189 | orchestrator | TASK [Set _mon_hostname fact] *************************************************
2025-06-01 23:21:29.035200 | orchestrator | Sunday 01 June 2025 23:21:25 +0000 (0:00:00.332) 0:00:11.228 ***********
2025-06-01 23:21:29.035211 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:29.035222 | orchestrator |
2025-06-01 23:21:29.035233 | orchestrator | TASK [Get ceph osd tree] ******************************************************
2025-06-01 23:21:29.035251 | orchestrator | Sunday 01 June 2025 23:21:26 +0000 (0:00:00.840) 0:00:12.068 ***********
2025-06-01 23:21:29.035262 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-01 23:21:29.035273 | orchestrator |
2025-06-01 23:21:29.035284 | orchestrator | TASK [Parse osd tree from JSON] ***********************************************
2025-06-01 23:21:29.035295 | orchestrator | Sunday 01 June 2025 23:21:28 +0000 (0:00:01.610) 0:00:13.679 ***********
2025-06-01 23:21:29.035306 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:29.035317 | orchestrator |
2025-06-01 23:21:29.035328 | orchestrator | TASK [Get OSDs that are not up or in] *****************************************
2025-06-01 23:21:29.035338 | orchestrator | Sunday 01 June 2025 23:21:28 +0000 (0:00:00.332) 0:00:13.814 ***********
2025-06-01 23:21:29.035350 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:29.035360 | orchestrator |
2025-06-01 23:21:29.035371 | orchestrator | TASK [Fail test if OSDs are not up or in] *************************************
2025-06-01 23:21:29.035382 | orchestrator | Sunday 01 June 2025 23:21:28 +0000 (0:00:00.332) 0:00:14.147 ***********
2025-06-01 23:21:29.035399 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:21:42.174267 | orchestrator |
2025-06-01 23:21:42.174380 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************
2025-06-01 23:21:42.174397 | orchestrator | Sunday 01 June 2025 23:21:29 +0000 (0:00:00.126) 0:00:14.274 ***********
2025-06-01 23:21:42.174409 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:42.174421 | orchestrator |
2025-06-01 23:21:42.174433 | orchestrator | TASK [Prepare test data] ******************************************************
2025-06-01 23:21:42.174444 | orchestrator | Sunday 01 June 2025 23:21:29 +0000 (0:00:00.137) 0:00:14.411 ***********
2025-06-01 23:21:42.174455 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:42.174466 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:21:42.174477 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:21:42.174487 | orchestrator |
2025-06-01 23:21:42.174545 | orchestrator | TASK [List ceph LVM volumes and collect data] *********************************
2025-06-01 23:21:42.174558 | orchestrator | Sunday 01 June 2025 23:21:29 +0000 (0:00:00.313) 0:00:14.724 ***********
2025-06-01 23:21:42.174569 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:21:42.174581 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:21:42.174592 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:21:42.174651 | orchestrator |
2025-06-01 23:21:42.174663 | orchestrator | TASK [Parse LVM data as JSON] *************************************************
2025-06-01 23:21:42.174674 | orchestrator | Sunday 01 June 2025 23:21:32 +0000 (0:00:02.625) 0:00:17.350 ***********
2025-06-01 23:21:42.174685 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:42.174696 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:21:42.174707 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:21:42.174718 | orchestrator |
2025-06-01 23:21:42.174728 | orchestrator | TASK [Get unencrypted and encrypted OSDs] *************************************
2025-06-01 23:21:42.174739 | orchestrator | Sunday 01 June 2025 23:21:32 +0000 (0:00:00.358) 0:00:17.709 ***********
2025-06-01 23:21:42.174750 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:42.174761 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:21:42.174772 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:21:42.174783 | orchestrator |
2025-06-01 23:21:42.174794 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] *************************
2025-06-01 23:21:42.174806 | orchestrator | Sunday 01 June 2025 23:21:32 +0000 (0:00:00.499) 0:00:18.208 ***********
2025-06-01 23:21:42.174818 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:21:42.174830 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:21:42.174843 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:21:42.174855 | orchestrator |
2025-06-01 23:21:42.174867 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] *******************
2025-06-01 23:21:42.174879 | orchestrator | Sunday 01 June 2025 23:21:33 +0000 (0:00:00.305) 0:00:18.514 ***********
2025-06-01 23:21:42.174891 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:42.174902 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:21:42.174914 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:21:42.174984 | orchestrator |
2025-06-01 23:21:42.175006 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ***********************
2025-06-01 23:21:42.175025 | orchestrator | Sunday 01 June 2025 23:21:33 +0000 (0:00:00.584) 0:00:19.098 ***********
2025-06-01 23:21:42.175043 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:21:42.175055 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:21:42.175068 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:21:42.175080 | orchestrator |
2025-06-01 23:21:42.175092 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] *****************
2025-06-01 23:21:42.175111 | orchestrator | Sunday 01 June 2025 23:21:34 +0000 (0:00:00.290) 0:00:19.389 ***********
2025-06-01 23:21:42.175124 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:21:42.175136 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:21:42.175149 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:21:42.175161 | orchestrator |
2025-06-01 23:21:42.175172 | orchestrator | TASK [Prepare test data] ******************************************************
2025-06-01 23:21:42.175183 | orchestrator | Sunday 01 June 2025 23:21:34 +0000 (0:00:00.295) 0:00:19.684 ***********
2025-06-01 23:21:42.175194 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:42.175205 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:21:42.175215 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:21:42.175226 | orchestrator |
2025-06-01 23:21:42.175237 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] **************
2025-06-01 23:21:42.175248 | orchestrator | Sunday 01 June 2025 23:21:34 +0000 (0:00:00.515) 0:00:20.200 ***********
2025-06-01 23:21:42.175258 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:42.175269 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:21:42.175280 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:21:42.175290 | orchestrator |
2025-06-01 23:21:42.175301 | orchestrator | TASK [Calculate sub test expression results] **********************************
2025-06-01 23:21:42.175312 | orchestrator | Sunday 01 June 2025 23:21:35 +0000 (0:00:00.773) 0:00:20.973 ***********
2025-06-01 23:21:42.175322 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:42.175333 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:21:42.175344 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:21:42.175355 | orchestrator |
2025-06-01 23:21:42.175366 | orchestrator | TASK [Fail test if any sub test failed] ***************************************
2025-06-01 23:21:42.175376 | orchestrator | Sunday 01 June 2025 23:21:36 +0000 (0:00:00.350) 0:00:21.324 ***********
2025-06-01 23:21:42.175387 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:21:42.175398 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:21:42.175409 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:21:42.175420 | orchestrator |
2025-06-01 23:21:42.175430 | orchestrator | TASK [Pass test if no sub test failed] ****************************************
2025-06-01 23:21:42.175441 | orchestrator | Sunday 01 June 2025 23:21:36 +0000 (0:00:00.294) 0:00:21.619 ***********
2025-06-01 23:21:42.175452 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:21:42.175462 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:21:42.175473 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:21:42.175484 | orchestrator |
2025-06-01 23:21:42.175495 | orchestrator | TASK [Set validation result to passed if no test failed] **********************
2025-06-01 23:21:42.175506 | orchestrator | Sunday 01 June 2025 23:21:36 +0000 (0:00:00.317) 0:00:21.936 ***********
2025-06-01 23:21:42.175516 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-01 23:21:42.175527 | orchestrator |
2025-06-01 23:21:42.175538 | orchestrator | TASK [Set validation result to failed if a test failed] ***********************
2025-06-01 23:21:42.175549 | orchestrator | Sunday 01 June 2025 23:21:37 +0000 (0:00:00.786) 0:00:22.722 ***********
2025-06-01 23:21:42.175560 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:21:42.175571 | orchestrator |
2025-06-01 23:21:42.175600 | orchestrator | TASK [Aggregate test results step one] ****************************************
2025-06-01 23:21:42.175611 | orchestrator | Sunday 01 June 2025 23:21:37 +0000 (0:00:00.277) 0:00:23.000 ***********
2025-06-01 23:21:42.175622 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-01 23:21:42.175641 | orchestrator |
2025-06-01 23:21:42.175652 | orchestrator | TASK [Aggregate test results step two] ****************************************
2025-06-01 23:21:42.175662 | orchestrator | Sunday 01 June 2025 23:21:39 +0000 (0:00:01.670) 0:00:24.671 ***********
2025-06-01 23:21:42.175673 | orchestrator | ok: [testbed-node-3 ->
testbed-manager(192.168.16.5)] 2025-06-01 23:21:42.175683 | orchestrator | 2025-06-01 23:21:42.175694 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-01 23:21:42.175705 | orchestrator | Sunday 01 June 2025 23:21:39 +0000 (0:00:00.257) 0:00:24.928 *********** 2025-06-01 23:21:42.175715 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-01 23:21:42.175726 | orchestrator | 2025-06-01 23:21:42.175737 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-01 23:21:42.175747 | orchestrator | Sunday 01 June 2025 23:21:39 +0000 (0:00:00.258) 0:00:25.186 *********** 2025-06-01 23:21:42.175758 | orchestrator | 2025-06-01 23:21:42.175769 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-01 23:21:42.175779 | orchestrator | Sunday 01 June 2025 23:21:40 +0000 (0:00:00.085) 0:00:25.271 *********** 2025-06-01 23:21:42.175790 | orchestrator | 2025-06-01 23:21:42.175800 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-01 23:21:42.175813 | orchestrator | Sunday 01 June 2025 23:21:40 +0000 (0:00:00.068) 0:00:25.340 *********** 2025-06-01 23:21:42.175831 | orchestrator | 2025-06-01 23:21:42.175849 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-01 23:21:42.175868 | orchestrator | Sunday 01 June 2025 23:21:40 +0000 (0:00:00.069) 0:00:25.410 *********** 2025-06-01 23:21:42.175886 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-01 23:21:42.175905 | orchestrator | 2025-06-01 23:21:42.175918 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-01 23:21:42.175929 | orchestrator | Sunday 01 June 2025 23:21:41 +0000 (0:00:01.341) 0:00:26.752 *********** 2025-06-01 23:21:42.175969 | orchestrator | 
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-06-01 23:21:42.175981 | orchestrator |  "msg": [ 2025-06-01 23:21:42.175993 | orchestrator |  "Validator run completed.", 2025-06-01 23:21:42.176004 | orchestrator |  "You can find the report file here:", 2025-06-01 23:21:42.176015 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-06-01T23:21:15+00:00-report.json", 2025-06-01 23:21:42.176027 | orchestrator |  "on the following host:", 2025-06-01 23:21:42.176038 | orchestrator |  "testbed-manager" 2025-06-01 23:21:42.176048 | orchestrator |  ] 2025-06-01 23:21:42.176060 | orchestrator | } 2025-06-01 23:21:42.176071 | orchestrator | 2025-06-01 23:21:42.176081 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:21:42.176094 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-06-01 23:21:42.176111 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-01 23:21:42.176123 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-01 23:21:42.176134 | orchestrator | 2025-06-01 23:21:42.176145 | orchestrator | 2025-06-01 23:21:42.176156 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 23:21:42.176167 | orchestrator | Sunday 01 June 2025 23:21:42 +0000 (0:00:00.641) 0:00:27.393 *********** 2025-06-01 23:21:42.176178 | orchestrator | =============================================================================== 2025-06-01 23:21:42.176189 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.63s 2025-06-01 23:21:42.176199 | orchestrator | Aggregate test results step one ----------------------------------------- 1.67s 2025-06-01 23:21:42.176210 | orchestrator | Get ceph osd tree 
------------------------------------------------------- 1.61s 2025-06-01 23:21:42.176229 | orchestrator | Write report file ------------------------------------------------------- 1.34s 2025-06-01 23:21:42.176240 | orchestrator | Create report output directory ------------------------------------------ 1.09s 2025-06-01 23:21:42.176251 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.84s 2025-06-01 23:21:42.176262 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.79s 2025-06-01 23:21:42.176272 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.77s 2025-06-01 23:21:42.176283 | orchestrator | Aggregate test results step one ----------------------------------------- 0.77s 2025-06-01 23:21:42.176294 | orchestrator | Get timestamp for report file ------------------------------------------- 0.67s 2025-06-01 23:21:42.176304 | orchestrator | Print report file information ------------------------------------------- 0.64s 2025-06-01 23:21:42.176315 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.63s 2025-06-01 23:21:42.176326 | orchestrator | Prepare test data ------------------------------------------------------- 0.61s 2025-06-01 23:21:42.176336 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.60s 2025-06-01 23:21:42.176347 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.58s 2025-06-01 23:21:42.176358 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.57s 2025-06-01 23:21:42.176376 | orchestrator | Prepare test data ------------------------------------------------------- 0.52s 2025-06-01 23:21:42.532927 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.50s 2025-06-01 23:21:42.533069 | orchestrator | Get list of ceph-osd containers 
on host --------------------------------- 0.45s 2025-06-01 23:21:42.533082 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.44s 2025-06-01 23:21:42.869894 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-06-01 23:21:42.877286 | orchestrator | + set -e 2025-06-01 23:21:42.877322 | orchestrator | + source /opt/manager-vars.sh 2025-06-01 23:21:42.877336 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-01 23:21:42.877347 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-01 23:21:42.877358 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-01 23:21:42.877370 | orchestrator | ++ CEPH_VERSION=reef 2025-06-01 23:21:42.877381 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-01 23:21:42.877394 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-01 23:21:42.877405 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-01 23:21:42.877416 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-01 23:21:42.877427 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-01 23:21:42.877438 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-01 23:21:42.877449 | orchestrator | ++ export ARA=false 2025-06-01 23:21:42.877460 | orchestrator | ++ ARA=false 2025-06-01 23:21:42.877470 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-01 23:21:42.877481 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-01 23:21:42.877492 | orchestrator | ++ export TEMPEST=false 2025-06-01 23:21:42.877502 | orchestrator | ++ TEMPEST=false 2025-06-01 23:21:42.877513 | orchestrator | ++ export IS_ZUUL=true 2025-06-01 23:21:42.877524 | orchestrator | ++ IS_ZUUL=true 2025-06-01 23:21:42.877535 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.90 2025-06-01 23:21:42.877545 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.90 2025-06-01 23:21:42.877556 | orchestrator | ++ export EXTERNAL_API=false 2025-06-01 23:21:42.877567 | orchestrator | ++ EXTERNAL_API=false 2025-06-01 23:21:42.877578 | 
orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-01 23:21:42.877588 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-01 23:21:42.877599 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-01 23:21:42.877610 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-01 23:21:42.877620 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-01 23:21:42.877631 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-01 23:21:42.877642 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-01 23:21:42.877652 | orchestrator | + source /etc/os-release 2025-06-01 23:21:42.877663 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS' 2025-06-01 23:21:42.877674 | orchestrator | ++ NAME=Ubuntu 2025-06-01 23:21:42.877684 | orchestrator | ++ VERSION_ID=24.04 2025-06-01 23:21:42.877695 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)' 2025-06-01 23:21:42.877707 | orchestrator | ++ VERSION_CODENAME=noble 2025-06-01 23:21:42.877744 | orchestrator | ++ ID=ubuntu 2025-06-01 23:21:42.877755 | orchestrator | ++ ID_LIKE=debian 2025-06-01 23:21:42.877766 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-06-01 23:21:42.877777 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-06-01 23:21:42.877788 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-06-01 23:21:42.877799 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-06-01 23:21:42.877810 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-06-01 23:21:42.877821 | orchestrator | ++ LOGO=ubuntu-logo 2025-06-01 23:21:42.877831 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-06-01 23:21:42.877843 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-06-01 23:21:42.877855 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-01 23:21:42.905189 | orchestrator | + sudo apt-get install -y 
libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-01 23:22:05.988598 | orchestrator | 2025-06-01 23:22:05.988753 | orchestrator | # Status of Elasticsearch 2025-06-01 23:22:05.988770 | orchestrator | 2025-06-01 23:22:05.988782 | orchestrator | + pushd /opt/configuration/contrib 2025-06-01 23:22:05.988796 | orchestrator | + echo 2025-06-01 23:22:05.988807 | orchestrator | + echo '# Status of Elasticsearch' 2025-06-01 23:22:05.988818 | orchestrator | + echo 2025-06-01 23:22:05.988830 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-06-01 23:22:06.215347 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-06-01 23:22:06.215729 | orchestrator | 2025-06-01 23:22:06.215757 | orchestrator | # Status of MariaDB 2025-06-01 23:22:06.215771 | orchestrator | 2025-06-01 23:22:06.215783 | orchestrator | + echo 2025-06-01 23:22:06.215794 | orchestrator | + echo '# Status of MariaDB' 2025-06-01 23:22:06.215806 | orchestrator | + echo 2025-06-01 23:22:06.215817 | orchestrator | + MARIADB_USER=root_shard_0 2025-06-01 23:22:06.215830 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-06-01 23:22:06.291249 | orchestrator | Reading package lists... 2025-06-01 23:22:06.667150 | orchestrator | Building dependency tree... 2025-06-01 23:22:06.667512 | orchestrator | Reading state information... 2025-06-01 23:22:07.129160 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-06-01 23:22:07.129322 | orchestrator | bc set to manually installed. 
2025-06-01 23:22:07.129351 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2025-06-01 23:22:07.864068 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-06-01 23:22:07.865114 | orchestrator | 2025-06-01 23:22:07.865129 | orchestrator | # Status of Prometheus 2025-06-01 23:22:07.865135 | orchestrator | 2025-06-01 23:22:07.865139 | orchestrator | + echo 2025-06-01 23:22:07.865144 | orchestrator | + echo '# Status of Prometheus' 2025-06-01 23:22:07.865149 | orchestrator | + echo 2025-06-01 23:22:07.865154 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-06-01 23:22:07.916805 | orchestrator | Unauthorized 2025-06-01 23:22:07.921143 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-06-01 23:22:08.006395 | orchestrator | Unauthorized 2025-06-01 23:22:08.011890 | orchestrator | 2025-06-01 23:22:08.011923 | orchestrator | # Status of RabbitMQ 2025-06-01 23:22:08.011968 | orchestrator | 2025-06-01 23:22:08.011981 | orchestrator | + echo 2025-06-01 23:22:08.011992 | orchestrator | + echo '# Status of RabbitMQ' 2025-06-01 23:22:08.012004 | orchestrator | + echo 2025-06-01 23:22:08.012017 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-06-01 23:22:08.513892 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-06-01 23:22:08.524459 | orchestrator | 2025-06-01 23:22:08.524490 | orchestrator | # Status of Redis 2025-06-01 23:22:08.524502 | orchestrator | 2025-06-01 23:22:08.524514 | orchestrator | + echo 2025-06-01 23:22:08.524525 | orchestrator | + echo '# Status of Redis' 2025-06-01 23:22:08.524537 | orchestrator | + echo 2025-06-01 23:22:08.524551 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e 
PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-06-01 23:22:08.530260 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002096s;;;0.000000;10.000000 2025-06-01 23:22:08.530848 | orchestrator | 2025-06-01 23:22:08.530871 | orchestrator | # Create backup of MariaDB database 2025-06-01 23:22:08.530885 | orchestrator | + popd 2025-06-01 23:22:08.530897 | orchestrator | + echo 2025-06-01 23:22:08.530909 | orchestrator | + echo '# Create backup of MariaDB database' 2025-06-01 23:22:08.530921 | orchestrator | + echo 2025-06-01 23:22:08.530933 | orchestrator | 2025-06-01 23:22:08.530971 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-06-01 23:22:10.416873 | orchestrator | 2025-06-01 23:22:10 | INFO  | Task 8024e039-51be-47ea-9598-84bd3dc1d4b2 (mariadb_backup) was prepared for execution. 2025-06-01 23:22:10.417055 | orchestrator | 2025-06-01 23:22:10 | INFO  | It takes a moment until task 8024e039-51be-47ea-9598-84bd3dc1d4b2 (mariadb_backup) has been started and output is visible here. 
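The infrastructure checks above succeed because each Nagios-style plugin (check_elasticsearch, check_galera_cluster, check_rabbitmq_cluster, check_tcp) exits with the standard monitoring-plugins status codes, and the check scripts run under `set -e`, so any non-OK exit aborts the whole run. A minimal sketch of that convention (the `status_label` helper is hypothetical, not part of the testbed scripts):

```shell
# Standard monitoring-plugins exit codes: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN.
# Under `set -e`, any plugin returning a non-zero code stops the check script.
status_label() {
  case "$1" in
    0) echo "OK" ;;
    1) echo "WARNING" ;;
    2) echo "CRITICAL" ;;
    *) echo "UNKNOWN" ;;
  esac
}

status_label 0   # prints "OK"
status_label 2   # prints "CRITICAL"
```

The `Unauthorized` responses from the Prometheus `/-/healthy` and `/-/ready` probes are plain `curl` output, not plugin results, which is why they do not abort the run.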
2025-06-01 23:22:14.819075 | orchestrator | 2025-06-01 23:22:14.819204 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 23:22:14.820830 | orchestrator | 2025-06-01 23:22:14.822597 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 23:22:14.822903 | orchestrator | Sunday 01 June 2025 23:22:14 +0000 (0:00:00.192) 0:00:00.192 *********** 2025-06-01 23:22:15.026432 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:22:15.148075 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:22:15.148642 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:22:15.149788 | orchestrator | 2025-06-01 23:22:15.150481 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 23:22:15.151414 | orchestrator | Sunday 01 June 2025 23:22:15 +0000 (0:00:00.332) 0:00:00.524 *********** 2025-06-01 23:22:15.803857 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-01 23:22:15.804594 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-01 23:22:15.806837 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-01 23:22:15.806876 | orchestrator | 2025-06-01 23:22:15.807988 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-01 23:22:15.809004 | orchestrator | 2025-06-01 23:22:15.809584 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-01 23:22:15.810085 | orchestrator | Sunday 01 June 2025 23:22:15 +0000 (0:00:00.655) 0:00:01.180 *********** 2025-06-01 23:22:16.254345 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-01 23:22:16.254472 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-01 23:22:16.256390 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-01 23:22:16.257829 | orchestrator | 
2025-06-01 23:22:16.259566 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-01 23:22:16.261413 | orchestrator | Sunday 01 June 2025 23:22:16 +0000 (0:00:00.448) 0:00:01.628 *********** 2025-06-01 23:22:16.813398 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:22:16.815423 | orchestrator | 2025-06-01 23:22:16.817124 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-06-01 23:22:16.818248 | orchestrator | Sunday 01 June 2025 23:22:16 +0000 (0:00:00.561) 0:00:02.190 *********** 2025-06-01 23:22:20.220185 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:22:20.221100 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:22:20.222190 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:22:20.222694 | orchestrator | 2025-06-01 23:22:20.223656 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-06-01 23:22:20.225314 | orchestrator | Sunday 01 June 2025 23:22:20 +0000 (0:00:03.400) 0:00:05.590 *********** 2025-06-01 23:23:09.696285 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-01 23:23:09.696416 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-06-01 23:23:09.696433 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-01 23:23:09.696474 | orchestrator | mariadb_bootstrap_restart 2025-06-01 23:23:09.770877 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:23:09.771139 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:23:09.772137 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:23:09.776636 | orchestrator | 2025-06-01 23:23:09.776667 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-01 23:23:09.776681 | orchestrator | 
skipping: no hosts matched 2025-06-01 23:23:09.776692 | orchestrator | 2025-06-01 23:23:09.776704 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-01 23:23:09.777000 | orchestrator | skipping: no hosts matched 2025-06-01 23:23:09.777767 | orchestrator | 2025-06-01 23:23:09.778079 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-01 23:23:09.779118 | orchestrator | skipping: no hosts matched 2025-06-01 23:23:09.781696 | orchestrator | 2025-06-01 23:23:09.781737 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-01 23:23:09.781750 | orchestrator | 2025-06-01 23:23:09.781762 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-01 23:23:09.781774 | orchestrator | Sunday 01 June 2025 23:23:09 +0000 (0:00:49.557) 0:00:55.148 *********** 2025-06-01 23:23:09.957133 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:23:10.080619 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:23:10.081496 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:23:10.082418 | orchestrator | 2025-06-01 23:23:10.083870 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-01 23:23:10.084999 | orchestrator | Sunday 01 June 2025 23:23:10 +0000 (0:00:00.310) 0:00:55.458 *********** 2025-06-01 23:23:10.537117 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:23:10.583861 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:23:10.584731 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:23:10.585108 | orchestrator | 2025-06-01 23:23:10.587095 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:23:10.587122 | orchestrator | 2025-06-01 23:23:10 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-06-01 23:23:10.587137 | orchestrator | 2025-06-01 23:23:10 | INFO  | Please wait and do not abort execution. 2025-06-01 23:23:10.587822 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 23:23:10.588552 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-01 23:23:10.589541 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-01 23:23:10.590441 | orchestrator | 2025-06-01 23:23:10.591403 | orchestrator | 2025-06-01 23:23:10.592123 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 23:23:10.592720 | orchestrator | Sunday 01 June 2025 23:23:10 +0000 (0:00:00.502) 0:00:55.961 *********** 2025-06-01 23:23:10.593418 | orchestrator | =============================================================================== 2025-06-01 23:23:10.594159 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 49.56s 2025-06-01 23:23:10.594477 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.40s 2025-06-01 23:23:10.595290 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.66s 2025-06-01 23:23:10.595558 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.56s 2025-06-01 23:23:10.596047 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.50s 2025-06-01 23:23:10.596069 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.45s 2025-06-01 23:23:10.596527 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-06-01 23:23:10.597074 | orchestrator | Include mariadb post-deploy.yml 
----------------------------------------- 0.31s 2025-06-01 23:23:11.324821 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-06-01 23:23:11.331876 | orchestrator | + set -e 2025-06-01 23:23:11.331921 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-01 23:23:11.331936 | orchestrator | ++ export INTERACTIVE=false 2025-06-01 23:23:11.331977 | orchestrator | ++ INTERACTIVE=false 2025-06-01 23:23:11.331990 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-01 23:23:11.332001 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-01 23:23:11.332012 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-01 23:23:11.332820 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-01 23:23:11.336312 | orchestrator | 2025-06-01 23:23:11.336359 | orchestrator | # OpenStack endpoints 2025-06-01 23:23:11.336372 | orchestrator | 2025-06-01 23:23:11.336383 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-01 23:23:11.336394 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-01 23:23:11.336406 | orchestrator | + export OS_CLOUD=admin 2025-06-01 23:23:11.336417 | orchestrator | + OS_CLOUD=admin 2025-06-01 23:23:11.336428 | orchestrator | + echo 2025-06-01 23:23:11.336439 | orchestrator | + echo '# OpenStack endpoints' 2025-06-01 23:23:11.336450 | orchestrator | + echo 2025-06-01 23:23:11.336461 | orchestrator | + openstack endpoint list 2025-06-01 23:23:14.753516 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-01 23:23:14.754321 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-06-01 23:23:14.754361 | orchestrator | 
+----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-01 23:23:14.754394 | orchestrator | | 03d69e5bd20d488d83eaf5dc35c46c19 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-06-01 23:23:14.754408 | orchestrator | | 0ac5766745ac4ac991b0c6c42505c08b | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-06-01 23:23:14.754421 | orchestrator | | 0bea1aaa804d47e693183fbebd5b0114 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-06-01 23:23:14.754433 | orchestrator | | 0de1e2463de843f0968b60378a423e24 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-06-01 23:23:14.754446 | orchestrator | | 1d1eaa33a4b8483984b526cbaeff7d3b | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-06-01 23:23:14.754457 | orchestrator | | 1d4900c556054dc299ae4b22e26e84c7 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-06-01 23:23:14.754468 | orchestrator | | 2104be41115c45598ab2e77594c36138 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-06-01 23:23:14.754478 | orchestrator | | 2f6a84f951ea423ca93abb803ddece33 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-06-01 23:23:14.754489 | orchestrator | | 49f9b90ff3c24b77b308d29e7a5612b4 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-06-01 23:23:14.754499 | orchestrator | | 5adc008c4bf54efcaf94fbfc36846762 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-06-01 23:23:14.754510 | orchestrator | | 738ae39ad4bc4fb092d539eb1d5273a2 | RegionOne | keystone | 
identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-06-01 23:23:14.754541 | orchestrator | | 7e04e8f02aab4245930ddc1a9249abdc | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-06-01 23:23:14.754552 | orchestrator | | 7f885c59ce4a4df98e7c733bafacc633 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-06-01 23:23:14.754563 | orchestrator | | 823997cc467845eda839e11e9e5cdd22 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-06-01 23:23:14.754573 | orchestrator | | 88a49da7eecb4d2491f5d383ee19e624 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-06-01 23:23:14.754584 | orchestrator | | 88de153f407c440b9ca3bdfb8d9e255e | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-06-01 23:23:14.754595 | orchestrator | | 8fa46d4d438e440a99c90473e7644059 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-06-01 23:23:14.754606 | orchestrator | | a36d562526f047a1a86256e6f261f081 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-06-01 23:23:14.754616 | orchestrator | | b07e6f3e83b34993b22e0535d2a66f28 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-06-01 23:23:14.754627 | orchestrator | | b445e18e589749e58ab9965881a27e3e | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-06-01 23:23:14.754660 | orchestrator | | e7ab49dbb4804de4b7b070d45018e5c4 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-06-01 23:23:14.754672 | orchestrator | | ff819d3bdf6d40debad5f6f4e2133cb9 | RegionOne | magnum | container-infra | True | 
public | https://api.testbed.osism.xyz:9511/v1 |
2025-06-01 23:23:14.754683 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-06-01 23:23:15.097768 | orchestrator |
2025-06-01 23:23:15.097868 | orchestrator | # Cinder
2025-06-01 23:23:15.097882 | orchestrator |
2025-06-01 23:23:15.097894 | orchestrator | + echo
2025-06-01 23:23:15.097906 | orchestrator | + echo '# Cinder'
2025-06-01 23:23:15.097918 | orchestrator | + echo
2025-06-01 23:23:15.097930 | orchestrator | + openstack volume service list
2025-06-01 23:23:18.804084 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-06-01 23:23:18.804210 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2025-06-01 23:23:18.804227 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-06-01 23:23:18.804239 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-01T23:23:09.000000 |
2025-06-01 23:23:18.804251 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-01T23:23:11.000000 |
2025-06-01 23:23:18.804262 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-01T23:23:11.000000 |
2025-06-01 23:23:18.804273 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-06-01T23:23:12.000000 |
2025-06-01 23:23:18.804285 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-06-01T23:23:15.000000 |
2025-06-01 23:23:18.804296 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-06-01T23:23:08.000000 |
2025-06-01 23:23:18.804307 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up |
2025-06-01T23:23:15.000000 | 2025-06-01 23:23:18.804339 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-06-01T23:23:15.000000 | 2025-06-01 23:23:18.804351 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-06-01T23:23:16.000000 | 2025-06-01 23:23:18.804362 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-01 23:23:19.157452 | orchestrator | 2025-06-01 23:23:19.157576 | orchestrator | # Neutron 2025-06-01 23:23:19.157600 | orchestrator | 2025-06-01 23:23:19.157620 | orchestrator | + echo 2025-06-01 23:23:19.157640 | orchestrator | + echo '# Neutron' 2025-06-01 23:23:19.157661 | orchestrator | + echo 2025-06-01 23:23:19.157678 | orchestrator | + openstack network agent list 2025-06-01 23:23:21.994256 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-01 23:23:21.994370 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-06-01 23:23:21.994385 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-01 23:23:21.994397 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-06-01 23:23:21.994409 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-06-01 23:23:21.994419 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-06-01 23:23:21.994430 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-06-01 23:23:21.994441 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | 
testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-06-01 23:23:21.994452 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-06-01 23:23:21.994462 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-06-01 23:23:21.994473 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-06-01 23:23:21.994484 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-06-01 23:23:21.994494 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-01 23:23:22.340067 | orchestrator | + openstack network service provider list 2025-06-01 23:23:24.966856 | orchestrator | +---------------+------+---------+ 2025-06-01 23:23:24.967049 | orchestrator | | Service Type | Name | Default | 2025-06-01 23:23:24.967068 | orchestrator | +---------------+------+---------+ 2025-06-01 23:23:24.967080 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-06-01 23:23:24.967091 | orchestrator | +---------------+------+---------+ 2025-06-01 23:23:25.326692 | orchestrator | 2025-06-01 23:23:25.326794 | orchestrator | # Nova 2025-06-01 23:23:25.326811 | orchestrator | 2025-06-01 23:23:25.326823 | orchestrator | + echo 2025-06-01 23:23:25.326835 | orchestrator | + echo '# Nova' 2025-06-01 23:23:25.326847 | orchestrator | + echo 2025-06-01 23:23:25.326859 | orchestrator | + openstack compute service list 2025-06-01 23:23:28.193401 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-01 23:23:28.193511 | orchestrator | | ID | Binary | Host | Zone | Status | State | 
Updated At | 2025-06-01 23:23:28.193526 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-01 23:23:28.193579 | orchestrator | | cd18a76a-df95-4bdb-bba6-717ab11a993a | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-01T23:23:21.000000 | 2025-06-01 23:23:28.193591 | orchestrator | | bdc48ddf-da07-4045-b7d4-e9f3ad27f8ad | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-01T23:23:23.000000 | 2025-06-01 23:23:28.193602 | orchestrator | | 24ea17c4-4ae6-4a6e-9408-100d8b634181 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-01T23:23:24.000000 | 2025-06-01 23:23:28.193613 | orchestrator | | b7929d1c-fec3-4d0a-99cb-3dee393fd091 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-06-01T23:23:21.000000 | 2025-06-01 23:23:28.193624 | orchestrator | | ed3998fb-c246-461e-9369-c4669d9beee4 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-06-01T23:23:22.000000 | 2025-06-01 23:23:28.193634 | orchestrator | | 0a085d6e-3637-4464-b71c-d4dfb902b43b | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-06-01T23:23:26.000000 | 2025-06-01 23:23:28.193645 | orchestrator | | 6715710c-4139-426a-8a4e-04609e217c64 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-06-01T23:23:24.000000 | 2025-06-01 23:23:28.193656 | orchestrator | | d03c05f9-b9be-4d30-8838-a9727f266fed | nova-compute | testbed-node-4 | nova | enabled | up | 2025-06-01T23:23:23.000000 | 2025-06-01 23:23:28.193666 | orchestrator | | d83e9137-f0ab-4e57-8820-8da0ddef1fbe | nova-compute | testbed-node-3 | nova | enabled | up | 2025-06-01T23:23:23.000000 | 2025-06-01 23:23:28.193677 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-01 23:23:28.578865 | orchestrator | + openstack hypervisor list 2025-06-01 
23:23:33.043869 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-01 23:23:33.044039 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-06-01 23:23:33.044054 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-01 23:23:33.044066 | orchestrator | | 8cd80b25-a8db-4808-802e-4aa330a54836 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-06-01 23:23:33.044077 | orchestrator | | 82a71253-d14a-4ecc-be7d-5d88c925c952 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-06-01 23:23:33.044088 | orchestrator | | 028ddf2b-895a-43b6-902b-9a169a06869e | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-06-01 23:23:33.044099 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-01 23:23:33.397303 | orchestrator | 2025-06-01 23:23:33.397403 | orchestrator | # Run OpenStack test play 2025-06-01 23:23:33.397417 | orchestrator | 2025-06-01 23:23:33.397428 | orchestrator | + echo 2025-06-01 23:23:33.397439 | orchestrator | + echo '# Run OpenStack test play' 2025-06-01 23:23:33.397450 | orchestrator | + echo 2025-06-01 23:23:33.397461 | orchestrator | + osism apply --environment openstack test 2025-06-01 23:23:35.157686 | orchestrator | 2025-06-01 23:23:35 | INFO  | Trying to run play test in environment openstack 2025-06-01 23:23:35.162727 | orchestrator | Registering Redlock._acquired_script 2025-06-01 23:23:35.162764 | orchestrator | Registering Redlock._extend_script 2025-06-01 23:23:35.162777 | orchestrator | Registering Redlock._release_script 2025-06-01 23:23:35.230470 | orchestrator | 2025-06-01 23:23:35 | INFO  | Task c03ca406-4a6e-48e5-8004-d3b7b142b421 (test) was prepared for execution. 
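The commands above walk through per-service health checks (Keystone endpoints, Cinder volume services, Neutron agents, Nova compute services and hypervisors), each of which reports a State column that should read "up". A minimal sketch of automating that check — `all_up` is a hypothetical helper, not part of this job — which fails as soon as any state is not "up":

```shell
# Hypothetical helper (not part of the job): read a State column on
# stdin, one entry per line, and succeed only if every entry is "up".
all_up() {
  ! grep -qvw up
}

# Against a live cloud this might be fed from the CLI's value formatter,
# e.g.: openstack compute service list -f value -c State | all_up
# Self-contained demonstration on sample data:
printf 'up\nup\nup\n'   | all_up && echo "compute: ok"
printf 'up\ndown\nup\n' | all_up || echo "volume: degraded"
```

Using `grep -v` inverted and negated keeps the helper dependency-free: the function exits non-zero exactly when some line other than "up" appears.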
2025-06-01 23:23:35.230540 | orchestrator | 2025-06-01 23:23:35 | INFO  | It takes a moment until task c03ca406-4a6e-48e5-8004-d3b7b142b421 (test) has been started and output is visible here. 2025-06-01 23:23:39.403127 | orchestrator | 2025-06-01 23:23:39.403251 | orchestrator | PLAY [Create test project] ***************************************************** 2025-06-01 23:23:39.406542 | orchestrator | 2025-06-01 23:23:39.406581 | orchestrator | TASK [Create test domain] ****************************************************** 2025-06-01 23:23:39.406593 | orchestrator | Sunday 01 June 2025 23:23:39 +0000 (0:00:00.083) 0:00:00.083 *********** 2025-06-01 23:23:43.152778 | orchestrator | changed: [localhost] 2025-06-01 23:23:43.153695 | orchestrator | 2025-06-01 23:23:43.154143 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-06-01 23:23:43.154193 | orchestrator | Sunday 01 June 2025 23:23:43 +0000 (0:00:03.752) 0:00:03.835 *********** 2025-06-01 23:23:47.468414 | orchestrator | changed: [localhost] 2025-06-01 23:23:47.468532 | orchestrator | 2025-06-01 23:23:47.469453 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-06-01 23:23:47.470396 | orchestrator | Sunday 01 June 2025 23:23:47 +0000 (0:00:04.314) 0:00:08.150 *********** 2025-06-01 23:23:53.823694 | orchestrator | changed: [localhost] 2025-06-01 23:23:53.823812 | orchestrator | 2025-06-01 23:23:53.824783 | orchestrator | TASK [Create test project] ***************************************************** 2025-06-01 23:23:53.825750 | orchestrator | Sunday 01 June 2025 23:23:53 +0000 (0:00:06.357) 0:00:14.508 *********** 2025-06-01 23:23:57.927553 | orchestrator | changed: [localhost] 2025-06-01 23:23:57.928167 | orchestrator | 2025-06-01 23:23:57.928449 | orchestrator | TASK [Create test user] ******************************************************** 2025-06-01 23:23:57.932194 | orchestrator | Sunday 01 June 2025 
23:23:57 +0000 (0:00:04.100) 0:00:18.609 *********** 2025-06-01 23:24:02.198072 | orchestrator | changed: [localhost] 2025-06-01 23:24:02.198186 | orchestrator | 2025-06-01 23:24:02.199623 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-06-01 23:24:02.200054 | orchestrator | Sunday 01 June 2025 23:24:02 +0000 (0:00:04.266) 0:00:22.875 *********** 2025-06-01 23:24:15.025175 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-06-01 23:24:15.025297 | orchestrator | changed: [localhost] => (item=member) 2025-06-01 23:24:15.025431 | orchestrator | changed: [localhost] => (item=creator) 2025-06-01 23:24:15.026337 | orchestrator | 2025-06-01 23:24:15.028393 | orchestrator | TASK [Create test server group] ************************************************ 2025-06-01 23:24:15.029028 | orchestrator | Sunday 01 June 2025 23:24:15 +0000 (0:00:12.826) 0:00:35.701 *********** 2025-06-01 23:24:19.438430 | orchestrator | changed: [localhost] 2025-06-01 23:24:19.438768 | orchestrator | 2025-06-01 23:24:19.438802 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-06-01 23:24:19.439580 | orchestrator | Sunday 01 June 2025 23:24:19 +0000 (0:00:04.418) 0:00:40.120 *********** 2025-06-01 23:24:24.602243 | orchestrator | changed: [localhost] 2025-06-01 23:24:24.602367 | orchestrator | 2025-06-01 23:24:24.603738 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-06-01 23:24:24.606415 | orchestrator | Sunday 01 June 2025 23:24:24 +0000 (0:00:05.164) 0:00:45.284 *********** 2025-06-01 23:24:28.898231 | orchestrator | changed: [localhost] 2025-06-01 23:24:28.898335 | orchestrator | 2025-06-01 23:24:28.898591 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-06-01 23:24:28.900418 | orchestrator | Sunday 01 June 2025 23:24:28 +0000 (0:00:04.297) 0:00:49.581 
*********** 2025-06-01 23:24:32.971440 | orchestrator | changed: [localhost] 2025-06-01 23:24:32.972680 | orchestrator | 2025-06-01 23:24:32.973259 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-06-01 23:24:32.975599 | orchestrator | Sunday 01 June 2025 23:24:32 +0000 (0:00:04.073) 0:00:53.655 *********** 2025-06-01 23:24:37.326165 | orchestrator | changed: [localhost] 2025-06-01 23:24:37.327150 | orchestrator | 2025-06-01 23:24:37.328271 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-06-01 23:24:37.328626 | orchestrator | Sunday 01 June 2025 23:24:37 +0000 (0:00:04.354) 0:00:58.010 *********** 2025-06-01 23:24:41.316833 | orchestrator | changed: [localhost] 2025-06-01 23:24:41.317149 | orchestrator | 2025-06-01 23:24:41.317930 | orchestrator | TASK [Create test network topology] ******************************************** 2025-06-01 23:24:41.318550 | orchestrator | Sunday 01 June 2025 23:24:41 +0000 (0:00:03.991) 0:01:02.002 *********** 2025-06-01 23:24:55.905203 | orchestrator | changed: [localhost] 2025-06-01 23:24:55.905350 | orchestrator | 2025-06-01 23:24:55.905408 | orchestrator | TASK [Create test instances] *************************************************** 2025-06-01 23:24:55.908047 | orchestrator | Sunday 01 June 2025 23:24:55 +0000 (0:00:14.582) 0:01:16.584 *********** 2025-06-01 23:27:09.248781 | orchestrator | changed: [localhost] => (item=test) 2025-06-01 23:27:09.248938 | orchestrator | changed: [localhost] => (item=test-1) 2025-06-01 23:27:09.248954 | orchestrator | changed: [localhost] => (item=test-2) 2025-06-01 23:27:09.248965 | orchestrator | 2025-06-01 23:27:09.248977 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-06-01 23:27:39.251299 | orchestrator | changed: [localhost] => (item=test-3) 2025-06-01 23:27:39.251406 | orchestrator | 2025-06-01 23:27:39.251423 | orchestrator | 
STILL ALIVE [task 'Create test instances' is running] ************************** 2025-06-01 23:28:09.249384 | orchestrator | 2025-06-01 23:28:09.251406 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-06-01 23:28:09.384581 | orchestrator | changed: [localhost] => (item=test-4) 2025-06-01 23:28:09.385406 | orchestrator | 2025-06-01 23:28:09.385816 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-06-01 23:28:09.387462 | orchestrator | Sunday 01 June 2025 23:28:09 +0000 (0:03:13.484) 0:04:30.068 *********** 2025-06-01 23:28:33.837803 | orchestrator | changed: [localhost] => (item=test) 2025-06-01 23:28:33.837949 | orchestrator | changed: [localhost] => (item=test-1) 2025-06-01 23:28:33.837966 | orchestrator | changed: [localhost] => (item=test-2) 2025-06-01 23:28:33.837978 | orchestrator | changed: [localhost] => (item=test-3) 2025-06-01 23:28:33.837989 | orchestrator | changed: [localhost] => (item=test-4) 2025-06-01 23:28:33.838001 | orchestrator | 2025-06-01 23:28:33.838074 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-06-01 23:28:33.838091 | orchestrator | Sunday 01 June 2025 23:28:33 +0000 (0:00:24.446) 0:04:54.515 *********** 2025-06-01 23:29:07.601275 | orchestrator | changed: [localhost] => (item=test) 2025-06-01 23:29:07.601531 | orchestrator | changed: [localhost] => (item=test-1) 2025-06-01 23:29:07.601550 | orchestrator | changed: [localhost] => (item=test-2) 2025-06-01 23:29:07.601562 | orchestrator | changed: [localhost] => (item=test-3) 2025-06-01 23:29:07.601573 | orchestrator | changed: [localhost] => (item=test-4) 2025-06-01 23:29:07.601584 | orchestrator | 2025-06-01 23:29:07.601597 | orchestrator | TASK [Create test volume] ****************************************************** 2025-06-01 23:29:07.601623 | orchestrator | Sunday 01 June 2025 23:29:07 +0000 (0:00:33.764) 0:05:28.280 
*********** 2025-06-01 23:29:14.609395 | orchestrator | changed: [localhost] 2025-06-01 23:29:14.612527 | orchestrator | 2025-06-01 23:29:14.619589 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-06-01 23:29:14.619689 | orchestrator | Sunday 01 June 2025 23:29:14 +0000 (0:00:07.012) 0:05:35.293 *********** 2025-06-01 23:29:28.325396 | orchestrator | changed: [localhost] 2025-06-01 23:29:28.325541 | orchestrator | 2025-06-01 23:29:28.325899 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-06-01 23:29:28.327078 | orchestrator | Sunday 01 June 2025 23:29:28 +0000 (0:00:13.712) 0:05:49.005 *********** 2025-06-01 23:29:33.661841 | orchestrator | ok: [localhost] 2025-06-01 23:29:33.662370 | orchestrator | 2025-06-01 23:29:33.663800 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-06-01 23:29:33.664731 | orchestrator | Sunday 01 June 2025 23:29:33 +0000 (0:00:05.334) 0:05:54.340 *********** 2025-06-01 23:29:33.721203 | orchestrator | ok: [localhost] => { 2025-06-01 23:29:33.721254 | orchestrator |  "msg": "192.168.112.116" 2025-06-01 23:29:33.721268 | orchestrator | } 2025-06-01 23:29:33.721280 | orchestrator | 2025-06-01 23:29:33.721292 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:29:33.721304 | orchestrator | 2025-06-01 23:29:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 23:29:33.723779 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 23:29:33.723846 | orchestrator | 2025-06-01 23:29:33.723861 | orchestrator | 2025-06-01 23:29:33 | INFO  | Please wait and do not abort execution. 
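The PLAY RECAP above (ok=20, changed=18, failed=0, unreachable=0) is what decides whether the test play passed. A sketch of machine-checking such a recap line — `recap_ok` is a hypothetical helper, not part of the osism tooling:

```shell
# Hypothetical helper (not part of the job): read an Ansible PLAY RECAP
# line on stdin and fail when failed= or unreachable= is non-zero.
recap_ok() {
  ! grep -Eq '(failed|unreachable)=[1-9]'
}

recap='localhost : ok=20 changed=18 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0'
printf '%s\n' "$recap" | recap_ok && echo "play succeeded"
```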
2025-06-01 23:29:33.724492 | orchestrator |
2025-06-01 23:29:33.727943 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:29:33.727968 | orchestrator | Sunday 01 June 2025 23:29:33 +0000 (0:00:00.055) 0:05:54.396 ***********
2025-06-01 23:29:33.727981 | orchestrator | ===============================================================================
2025-06-01 23:29:33.728723 | orchestrator | Create test instances ------------------------------------------------- 193.48s
2025-06-01 23:29:33.729709 | orchestrator | Add tag to instances --------------------------------------------------- 33.76s
2025-06-01 23:29:33.730246 | orchestrator | Add metadata to instances ---------------------------------------------- 24.45s
2025-06-01 23:29:33.731663 | orchestrator | Create test network topology ------------------------------------------- 14.58s
2025-06-01 23:29:33.733533 | orchestrator | Attach test volume ----------------------------------------------------- 13.71s
2025-06-01 23:29:33.734750 | orchestrator | Add member roles to user test ------------------------------------------ 12.83s
2025-06-01 23:29:33.735096 | orchestrator | Create test volume ------------------------------------------------------ 7.01s
2025-06-01 23:29:33.735592 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.36s
2025-06-01 23:29:33.736984 | orchestrator | Create floating ip address ---------------------------------------------- 5.33s
2025-06-01 23:29:33.737005 | orchestrator | Create ssh security group ----------------------------------------------- 5.16s
2025-06-01 23:29:33.737884 | orchestrator | Create test server group ------------------------------------------------ 4.42s
2025-06-01 23:29:33.738292 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.35s
2025-06-01 23:29:33.742698 | orchestrator | Create test-admin user -------------------------------------------------- 4.31s
2025-06-01 23:29:33.742719 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.30s
2025-06-01 23:29:33.742731 | orchestrator | Create test user -------------------------------------------------------- 4.27s
2025-06-01 23:29:33.742743 | orchestrator | Create test project ----------------------------------------------------- 4.10s
2025-06-01 23:29:33.742754 | orchestrator | Create icmp security group ---------------------------------------------- 4.07s
2025-06-01 23:29:33.742765 | orchestrator | Create test keypair ----------------------------------------------------- 3.99s
2025-06-01 23:29:33.742776 | orchestrator | Create test domain ------------------------------------------------------ 3.75s
2025-06-01 23:29:33.742787 | orchestrator | Print floating ip address ----------------------------------------------- 0.06s
2025-06-01 23:29:34.416527 | orchestrator | + server_list
2025-06-01 23:29:34.416651 | orchestrator | + openstack --os-cloud test server list
2025-06-01 23:29:38.210784 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-06-01 23:29:38.210894 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2025-06-01 23:29:38.210909 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-06-01 23:29:38.210921 | orchestrator | | 82b6ac01-8283-4de7-989f-6a9fb39994f6 | test-4 | ACTIVE | auto_allocated_network=10.42.0.48, 192.168.112.167 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-01 23:29:38.210932 | orchestrator | | 4cead3dd-f53b-4fba-ba7e-e6c37126bc3f | test-3 | ACTIVE | auto_allocated_network=10.42.0.16, 192.168.112.130 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-01 23:29:38.210943 | orchestrator | | e1b63f6e-54b6-4fdf-b9e0-90586b0cc0bd | test-2 | ACTIVE
| auto_allocated_network=10.42.0.28, 192.168.112.171 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-01 23:29:38.210954 | orchestrator | | e1e3b923-725d-4c6e-929d-326571f62ad2 | test-1 | ACTIVE | auto_allocated_network=10.42.0.26, 192.168.112.127 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-01 23:29:38.210965 | orchestrator | | 3263c373-bc9a-43a7-a15b-ba716152ff3c | test | ACTIVE | auto_allocated_network=10.42.0.9, 192.168.112.116 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-01 23:29:38.211005 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-06-01 23:29:38.596349 | orchestrator | + openstack --os-cloud test server show test 2025-06-01 23:29:41.910720 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-01 23:29:41.910829 | orchestrator | | Field | Value | 2025-06-01 23:29:41.910852 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-01 23:29:41.910864 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-01 23:29:41.910876 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-01 23:29:41.910888 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-01 23:29:41.910899 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-06-01 23:29:41.910910 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-01 
23:29:41.910922 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-01 23:29:41.910933 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-01 23:29:41.910962 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-01 23:29:41.910990 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-01 23:29:41.911001 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-01 23:29:41.911013 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-01 23:29:41.911028 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-01 23:29:41.911039 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-01 23:29:41.911051 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-01 23:29:41.911061 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-01 23:29:41.911072 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-01T23:25:26.000000 | 2025-06-01 23:29:41.911083 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-01 23:29:41.911094 | orchestrator | | accessIPv4 | | 2025-06-01 23:29:41.911112 | orchestrator | | accessIPv6 | | 2025-06-01 23:29:41.911123 | orchestrator | | addresses | auto_allocated_network=10.42.0.9, 192.168.112.116 | 2025-06-01 23:29:41.911140 | orchestrator | | config_drive | | 2025-06-01 23:29:41.911152 | orchestrator | | created | 2025-06-01T23:25:04Z | 2025-06-01 23:29:41.911166 | orchestrator | | description | None | 2025-06-01 23:29:41.911215 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-01 23:29:41.911227 | orchestrator | | hostId | f22bf07d99308ecb4b3c3247ba1a357927bd819c171caaa30a4e7fdb | 2025-06-01 23:29:41.911241 | orchestrator | | host_status | None | 2025-06-01 
23:29:41.911253 | orchestrator | | id | 3263c373-bc9a-43a7-a15b-ba716152ff3c | 2025-06-01 23:29:41.911265 | orchestrator | | image | Cirros 0.6.2 (1c78d65b-f984-43bf-99d4-374ba439d24b) | 2025-06-01 23:29:41.911278 | orchestrator | | key_name | test | 2025-06-01 23:29:41.911297 | orchestrator | | locked | False | 2025-06-01 23:29:41.911310 | orchestrator | | locked_reason | None | 2025-06-01 23:29:41.911322 | orchestrator | | name | test | 2025-06-01 23:29:41.911342 | orchestrator | | pinned_availability_zone | None | 2025-06-01 23:29:41.911356 | orchestrator | | progress | 0 | 2025-06-01 23:29:41.911372 | orchestrator | | project_id | d51bfeb7f79c4e0c8fea2baf394b3019 | 2025-06-01 23:29:41.911385 | orchestrator | | properties | hostname='test' | 2025-06-01 23:29:41.911397 | orchestrator | | security_groups | name='ssh' | 2025-06-01 23:29:41.911410 | orchestrator | | | name='icmp' | 2025-06-01 23:29:41.911422 | orchestrator | | server_groups | None | 2025-06-01 23:29:41.911433 | orchestrator | | status | ACTIVE | 2025-06-01 23:29:41.911452 | orchestrator | | tags | test | 2025-06-01 23:29:41.911465 | orchestrator | | trusted_image_certificates | None | 2025-06-01 23:29:41.911477 | orchestrator | | updated | 2025-06-01T23:28:14Z | 2025-06-01 23:29:41.911493 | orchestrator | | user_id | 839cd847b7ed4e6cbb21e5d688360f2a | 2025-06-01 23:29:41.911504 | orchestrator | | volumes_attached | delete_on_termination='False', id='7ccd8bdb-779c-4f8b-a094-82ed896d316e' | 2025-06-01 23:29:41.914699 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-01 23:29:42.268375 | orchestrator | + openstack --os-cloud test server show test-1 2025-06-01 23:29:45.589826 | 
orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-01 23:29:45.589934 | orchestrator | | Field | Value |
2025-06-01 23:29:45.589950 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-01 23:29:45.589962 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-06-01 23:29:45.589992 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-06-01 23:29:45.590005 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-06-01 23:29:45.590103 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2025-06-01 23:29:45.590127 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-06-01 23:29:45.590318 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-06-01 23:29:45.590343 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-06-01 23:29:45.590363 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-06-01 23:29:45.590417 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-06-01 23:29:45.590436 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-06-01 23:29:45.590456 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-06-01 23:29:45.590470 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-06-01 23:29:45.590493 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-06-01 23:29:45.590504 | orchestrator | | OS-EXT-STS:task_state | None |
2025-06-01 23:29:45.590516 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-06-01 23:29:45.590527 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-01T23:26:07.000000 |
2025-06-01 23:29:45.590538 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-06-01 23:29:45.590549 | orchestrator | | accessIPv4 | |
2025-06-01 23:29:45.590560 | orchestrator | | accessIPv6 | |
2025-06-01 23:29:45.590571 | orchestrator | | addresses | auto_allocated_network=10.42.0.26, 192.168.112.127 |
2025-06-01 23:29:45.590596 | orchestrator | | config_drive | |
2025-06-01 23:29:45.590608 | orchestrator | | created | 2025-06-01T23:25:46Z |
2025-06-01 23:29:45.590619 | orchestrator | | description | None |
2025-06-01 23:29:45.590636 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-01 23:29:45.590647 | orchestrator | | hostId | b1d52fd3dd6a475af6b3014d7f822356b7034cac4f95aba5ea1a8c26 |
2025-06-01 23:29:45.590658 | orchestrator | | host_status | None |
2025-06-01 23:29:45.590669 | orchestrator | | id | e1e3b923-725d-4c6e-929d-326571f62ad2 |
2025-06-01 23:29:45.590680 | orchestrator | | image | Cirros 0.6.2 (1c78d65b-f984-43bf-99d4-374ba439d24b) |
2025-06-01 23:29:45.590691 | orchestrator | | key_name | test |
2025-06-01 23:29:45.590702 | orchestrator | | locked | False |
2025-06-01 23:29:45.590712 | orchestrator | | locked_reason | None |
2025-06-01 23:29:45.590724 | orchestrator | | name | test-1 |
2025-06-01 23:29:45.590746 | orchestrator | | pinned_availability_zone | None |
2025-06-01 23:29:45.590758 | orchestrator | | progress | 0 |
2025-06-01 23:29:45.590780 | orchestrator | | project_id | d51bfeb7f79c4e0c8fea2baf394b3019 |
2025-06-01 23:29:45.590791 | orchestrator | | properties | hostname='test-1' |
2025-06-01 23:29:45.590802 | orchestrator | | security_groups | name='ssh' |
2025-06-01 23:29:45.590813 | orchestrator | | | name='icmp' |
2025-06-01 23:29:45.590824 | orchestrator | | server_groups | None |
2025-06-01 23:29:45.590835 | orchestrator | | status | ACTIVE |
2025-06-01 23:29:45.590846 | orchestrator | | tags | test |
2025-06-01 23:29:45.590857 | orchestrator | | trusted_image_certificates | None |
2025-06-01 23:29:45.590868 | orchestrator | | updated | 2025-06-01T23:28:19Z |
2025-06-01 23:29:45.590889 | orchestrator | | user_id | 839cd847b7ed4e6cbb21e5d688360f2a |
2025-06-01 23:29:45.590907 | orchestrator | | volumes_attached | |
2025-06-01 23:29:45.594462 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-01 23:29:46.017955 | orchestrator | + openstack --os-cloud test server show test-2
2025-06-01 23:29:49.285733 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-01 23:29:49.285851 | orchestrator | | Field | Value |
2025-06-01 23:29:49.285867 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-01 23:29:49.285880 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-06-01 23:29:49.285892 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-06-01 23:29:49.285904 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-06-01 23:29:49.285915 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2025-06-01 23:29:49.285927 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-06-01 23:29:49.285938 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-06-01 23:29:49.285976 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-06-01 23:29:49.285988 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-06-01 23:29:49.286070 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-06-01 23:29:49.286086 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-06-01 23:29:49.286097 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-06-01 23:29:49.286110 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-06-01 23:29:49.286122 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-06-01 23:29:49.286134 | orchestrator | | OS-EXT-STS:task_state | None |
2025-06-01 23:29:49.286146 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-06-01 23:29:49.286158 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-01T23:26:47.000000 |
2025-06-01 23:29:49.286225 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-06-01 23:29:49.286253 | orchestrator | | accessIPv4 | |
2025-06-01 23:29:49.286265 | orchestrator | | accessIPv6 | |
2025-06-01 23:29:49.286277 | orchestrator | | addresses | auto_allocated_network=10.42.0.28, 192.168.112.171 |
2025-06-01 23:29:49.286296 | orchestrator | | config_drive | |
2025-06-01 23:29:49.286308 | orchestrator | | created | 2025-06-01T23:26:27Z |
2025-06-01 23:29:49.286319 | orchestrator | | description | None |
2025-06-01 23:29:49.286330 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-01 23:29:49.286341 | orchestrator | | hostId | d75a8f01eaff8f5b1ee13c949f4eeb10579a6ccd6dcae016ab321da0 |
2025-06-01 23:29:49.286352 | orchestrator | | host_status | None |
2025-06-01 23:29:49.286363 | orchestrator | | id | e1b63f6e-54b6-4fdf-b9e0-90586b0cc0bd |
2025-06-01 23:29:49.286381 | orchestrator | | image | Cirros 0.6.2 (1c78d65b-f984-43bf-99d4-374ba439d24b) |
2025-06-01 23:29:49.286392 | orchestrator | | key_name | test |
2025-06-01 23:29:49.286408 | orchestrator | | locked | False |
2025-06-01 23:29:49.286419 | orchestrator | | locked_reason | None |
2025-06-01 23:29:49.286431 | orchestrator | | name | test-2 |
2025-06-01 23:29:49.286448 | orchestrator | | pinned_availability_zone | None |
2025-06-01 23:29:49.286460 | orchestrator | | progress | 0 |
2025-06-01 23:29:49.286471 | orchestrator | | project_id | d51bfeb7f79c4e0c8fea2baf394b3019 |
2025-06-01 23:29:49.286482 | orchestrator | | properties | hostname='test-2' |
2025-06-01 23:29:49.286493 | orchestrator | | security_groups | name='ssh' |
2025-06-01 23:29:49.286504 | orchestrator | | | name='icmp' |
2025-06-01 23:29:49.286522 | orchestrator | | server_groups | None |
2025-06-01 23:29:49.286533 | orchestrator | | status | ACTIVE |
2025-06-01 23:29:49.286544 | orchestrator | | tags | test |
2025-06-01 23:29:49.286560 | orchestrator | | trusted_image_certificates | None |
2025-06-01 23:29:49.286572 | orchestrator | | updated | 2025-06-01T23:28:24Z |
2025-06-01 23:29:49.286588 | orchestrator | | user_id | 839cd847b7ed4e6cbb21e5d688360f2a |
2025-06-01 23:29:49.286599 | orchestrator | | volumes_attached | |
2025-06-01 23:29:49.291240 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-01 23:29:49.611804 | orchestrator | + openstack --os-cloud test server show test-3
2025-06-01 23:29:52.750721 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-01 23:29:52.750827 | orchestrator | | Field | Value |
2025-06-01 23:29:52.750844 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-01 23:29:52.750880 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-06-01 23:29:52.750892 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-06-01 23:29:52.750904 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-06-01 23:29:52.750930 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2025-06-01 23:29:52.750942 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-06-01 23:29:52.750953 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-06-01 23:29:52.750964 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-06-01 23:29:52.750975 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-06-01 23:29:52.751003 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-06-01 23:29:52.751015 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-06-01 23:29:52.751026 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-06-01 23:29:52.751045 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-06-01 23:29:52.751056 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-06-01 23:29:52.751067 | orchestrator | | OS-EXT-STS:task_state | None |
2025-06-01 23:29:52.751078 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-06-01 23:29:52.751094 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-01T23:27:20.000000 |
2025-06-01 23:29:52.751105 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-06-01 23:29:52.751116 | orchestrator | | accessIPv4 | |
2025-06-01 23:29:52.751127 | orchestrator | | accessIPv6 | |
2025-06-01 23:29:52.751138 | orchestrator | | addresses | auto_allocated_network=10.42.0.16, 192.168.112.130 |
2025-06-01 23:29:52.751156 | orchestrator | | config_drive | |
2025-06-01 23:29:52.751176 | orchestrator | | created | 2025-06-01T23:27:04Z |
2025-06-01 23:29:52.751239 | orchestrator | | description | None |
2025-06-01 23:29:52.751250 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-01 23:29:52.751261 | orchestrator | | hostId | f22bf07d99308ecb4b3c3247ba1a357927bd819c171caaa30a4e7fdb |
2025-06-01 23:29:52.751272 | orchestrator | | host_status | None |
2025-06-01 23:29:52.751283 | orchestrator | | id | 4cead3dd-f53b-4fba-ba7e-e6c37126bc3f |
2025-06-01 23:29:52.751299 | orchestrator | | image | Cirros 0.6.2 (1c78d65b-f984-43bf-99d4-374ba439d24b) |
2025-06-01 23:29:52.751311 | orchestrator | | key_name | test |
2025-06-01 23:29:52.751322 | orchestrator | | locked | False |
2025-06-01 23:29:52.751333 | orchestrator | | locked_reason | None |
2025-06-01 23:29:52.751344 | orchestrator | | name | test-3 |
2025-06-01 23:29:52.751375 | orchestrator | | pinned_availability_zone | None |
2025-06-01 23:29:52.751387 | orchestrator | | progress | 0 |
2025-06-01 23:29:52.751398 | orchestrator | | project_id | d51bfeb7f79c4e0c8fea2baf394b3019 |
2025-06-01 23:29:52.751409 | orchestrator | | properties | hostname='test-3' |
2025-06-01 23:29:52.751420 | orchestrator | | security_groups | name='ssh' |
2025-06-01 23:29:52.751431 | orchestrator | | | name='icmp' |
2025-06-01 23:29:52.751442 | orchestrator | | server_groups | None |
2025-06-01 23:29:52.751453 | orchestrator | | status | ACTIVE |
2025-06-01 23:29:52.751464 | orchestrator | | tags | test |
2025-06-01 23:29:52.751476 | orchestrator | | trusted_image_certificates | None |
2025-06-01 23:29:52.751495 | orchestrator | | updated | 2025-06-01T23:28:28Z |
2025-06-01 23:29:52.751519 | orchestrator | | user_id | 839cd847b7ed4e6cbb21e5d688360f2a |
2025-06-01 23:29:52.751530 | orchestrator | | volumes_attached | |
2025-06-01 23:29:52.759043 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-01 23:29:53.110509 | orchestrator | + openstack --os-cloud test server show test-4
2025-06-01 23:29:56.312637 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-01 23:29:56.312772 | orchestrator | | Field | Value |
2025-06-01 23:29:56.312784 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-01 23:29:56.312792 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-06-01 23:29:56.312819 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-06-01 23:29:56.312826 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-06-01 23:29:56.312833 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2025-06-01 23:29:56.312840 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-06-01 23:29:56.312870 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-06-01 23:29:56.312878 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-06-01 23:29:56.312884 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-06-01 23:29:56.312908 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-06-01 23:29:56.312915 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-06-01 23:29:56.312922 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-06-01 23:29:56.312929 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-06-01 23:29:56.312935 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-06-01 23:29:56.312946 | orchestrator | | OS-EXT-STS:task_state | None |
2025-06-01 23:29:56.312953 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-06-01 23:29:56.312966 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-01T23:27:58.000000 |
2025-06-01 23:29:56.312972 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-06-01 23:29:56.312979 | orchestrator | | accessIPv4 | |
2025-06-01 23:29:56.312986 | orchestrator | | accessIPv6 | |
2025-06-01 23:29:56.312993 | orchestrator | | addresses | auto_allocated_network=10.42.0.48, 192.168.112.167 |
2025-06-01 23:29:56.313004 | orchestrator | | config_drive | |
2025-06-01 23:29:56.313011 | orchestrator | | created | 2025-06-01T23:27:42Z |
2025-06-01 23:29:56.313018 | orchestrator | | description | None |
2025-06-01 23:29:56.313025 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-01 23:29:56.313036 | orchestrator | | hostId | b1d52fd3dd6a475af6b3014d7f822356b7034cac4f95aba5ea1a8c26 |
2025-06-01 23:29:56.313042 | orchestrator | | host_status | None |
2025-06-01 23:29:56.313054 | orchestrator | | id | 82b6ac01-8283-4de7-989f-6a9fb39994f6 |
2025-06-01 23:29:56.313061 | orchestrator | | image | Cirros 0.6.2 (1c78d65b-f984-43bf-99d4-374ba439d24b) |
2025-06-01 23:29:56.313068 | orchestrator | | key_name | test |
2025-06-01 23:29:56.313075 | orchestrator | | locked | False |
2025-06-01 23:29:56.313082 | orchestrator | | locked_reason | None |
2025-06-01 23:29:56.313089 | orchestrator | | name | test-4 |
2025-06-01 23:29:56.313100 | orchestrator | | pinned_availability_zone | None |
2025-06-01 23:29:56.313107 | orchestrator | | progress | 0 |
2025-06-01 23:29:56.313115 | orchestrator | | project_id | d51bfeb7f79c4e0c8fea2baf394b3019 |
2025-06-01 23:29:56.313121 | orchestrator | | properties | hostname='test-4' |
2025-06-01 23:29:56.313131 | orchestrator | | security_groups | name='ssh' |
2025-06-01 23:29:56.313143 | orchestrator | | | name='icmp' |
2025-06-01 23:29:56.313150 | orchestrator | | server_groups | None |
2025-06-01 23:29:56.313156 | orchestrator | | status | ACTIVE |
2025-06-01 23:29:56.313163 | orchestrator | | tags | test |
2025-06-01 23:29:56.313169 | orchestrator | | trusted_image_certificates | None |
2025-06-01 23:29:56.313176 | orchestrator | | updated | 2025-06-01T23:28:33Z |
2025-06-01 23:29:56.313240 | orchestrator | | user_id | 839cd847b7ed4e6cbb21e5d688360f2a |
2025-06-01 23:29:56.313249 | orchestrator | | volumes_attached | |
2025-06-01 23:29:56.319058 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-01 23:29:56.661978 | orchestrator | + server_ping
2025-06-01 23:29:56.662783 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-06-01 23:29:56.663520 | orchestrator | ++ tr -d '\r'
2025-06-01 23:29:59.715576 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-01 23:29:59.715705 | orchestrator | + ping -c3 192.168.112.167
2025-06-01 23:29:59.730682 | orchestrator | PING 192.168.112.167 (192.168.112.167) 56(84) bytes of data.
2025-06-01 23:29:59.730783 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=1 ttl=63 time=8.21 ms
2025-06-01 23:30:00.726167 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=2 ttl=63 time=2.34 ms
2025-06-01 23:30:01.727542 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=3 ttl=63 time=2.10 ms
2025-06-01 23:30:01.727675 | orchestrator |
2025-06-01 23:30:01.727686 | orchestrator | --- 192.168.112.167 ping statistics ---
2025-06-01 23:30:01.727695 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-06-01 23:30:01.727703 | orchestrator | rtt min/avg/max/mdev = 2.095/4.214/8.205/2.823 ms
2025-06-01 23:30:01.728017 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-01 23:30:01.728035 | orchestrator | + ping -c3 192.168.112.171
2025-06-01 23:30:01.741907 | orchestrator | PING 192.168.112.171 (192.168.112.171) 56(84) bytes of data.
2025-06-01 23:30:01.741933 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=1 ttl=63 time=8.32 ms
2025-06-01 23:30:02.737536 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=2 ttl=63 time=2.67 ms
2025-06-01 23:30:03.738568 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=3 ttl=63 time=2.47 ms
2025-06-01 23:30:03.738699 | orchestrator |
2025-06-01 23:30:03.738714 | orchestrator | --- 192.168.112.171 ping statistics ---
2025-06-01 23:30:03.738727 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-06-01 23:30:03.738739 | orchestrator | rtt min/avg/max/mdev = 2.468/4.484/8.318/2.712 ms
2025-06-01 23:30:03.739002 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-01 23:30:03.739025 | orchestrator | + ping -c3 192.168.112.127
2025-06-01 23:30:03.754678 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data.
2025-06-01 23:30:03.754701 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=10.1 ms
2025-06-01 23:30:04.749352 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.89 ms
2025-06-01 23:30:05.750180 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=2.45 ms
2025-06-01 23:30:05.750330 | orchestrator |
2025-06-01 23:30:05.750345 | orchestrator | --- 192.168.112.127 ping statistics ---
2025-06-01 23:30:05.750356 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-01 23:30:05.750364 | orchestrator | rtt min/avg/max/mdev = 2.452/5.138/10.068/3.490 ms
2025-06-01 23:30:05.750755 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-01 23:30:05.750793 | orchestrator | + ping -c3 192.168.112.130
2025-06-01 23:30:05.766165 | orchestrator | PING 192.168.112.130 (192.168.112.130) 56(84) bytes of data.
2025-06-01 23:30:05.766285 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=1 ttl=63 time=10.4 ms
2025-06-01 23:30:06.759962 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=2 ttl=63 time=2.92 ms
2025-06-01 23:30:07.760613 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=3 ttl=63 time=2.21 ms
2025-06-01 23:30:07.760743 | orchestrator |
2025-06-01 23:30:07.760759 | orchestrator | --- 192.168.112.130 ping statistics ---
2025-06-01 23:30:07.760772 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-06-01 23:30:07.760785 | orchestrator | rtt min/avg/max/mdev = 2.212/5.172/10.388/3.699 ms
2025-06-01 23:30:07.761082 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-01 23:30:07.761106 | orchestrator | + ping -c3 192.168.112.116
2025-06-01 23:30:07.773426 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data.
2025-06-01 23:30:07.773488 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=7.19 ms
2025-06-01 23:30:08.770821 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.42 ms
2025-06-01 23:30:09.772678 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=2.31 ms
2025-06-01 23:30:09.772817 | orchestrator |
2025-06-01 23:30:09.772833 | orchestrator | --- 192.168.112.116 ping statistics ---
2025-06-01 23:30:09.772847 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-01 23:30:09.772859 | orchestrator | rtt min/avg/max/mdev = 2.309/3.972/7.192/2.277 ms
2025-06-01 23:30:09.772870 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]]
2025-06-01 23:30:10.034165 | orchestrator | ok: Runtime: 0:10:31.906076
2025-06-01 23:30:10.085304 |
2025-06-01 23:30:10.085454 | TASK [Run tempest]
2025-06-01 23:30:10.620488 | orchestrator | skipping: Conditional result was False
2025-06-01 23:30:10.638998 |
2025-06-01 23:30:10.639246 | TASK [Check prometheus alert status]
2025-06-01 23:30:11.190324 | orchestrator | skipping: Conditional result was False
2025-06-01 23:30:11.193916 |
2025-06-01 23:30:11.194091 | PLAY RECAP
2025-06-01 23:30:11.194299 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-06-01 23:30:11.194377 |
2025-06-01 23:30:11.441029 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-06-01 23:30:11.443651 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-01 23:30:12.225274 |
2025-06-01 23:30:12.225457 | PLAY [Post output play]
2025-06-01 23:30:12.242319 |
2025-06-01 23:30:12.242478 | LOOP [stage-output : Register sources]
2025-06-01 23:30:12.311862 |
2025-06-01 23:30:12.312190 | TASK [stage-output : Check sudo]
2025-06-01 23:30:13.289036 | orchestrator | sudo: a password is required
2025-06-01 23:30:13.374580 | orchestrator | ok: Runtime: 0:00:00.135899
2025-06-01 23:30:13.383526 |
2025-06-01 23:30:13.383682 | LOOP [stage-output : Set source and destination for files and folders]
2025-06-01 23:30:13.417812 |
2025-06-01 23:30:13.418082 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-06-01 23:30:13.485199 | orchestrator | ok
2025-06-01 23:30:13.493406 |
2025-06-01 23:30:13.493535 | LOOP [stage-output : Ensure target folders exist]
2025-06-01 23:30:13.935444 | orchestrator | ok: "docs"
2025-06-01 23:30:13.935749 |
2025-06-01 23:30:14.183864 | orchestrator | ok: "artifacts"
2025-06-01 23:30:14.433117 | orchestrator | ok: "logs"
2025-06-01 23:30:14.458292 |
2025-06-01 23:30:14.458470 | LOOP [stage-output : Copy files and folders to staging folder]
2025-06-01 23:30:14.499187 |
2025-06-01 23:30:14.499497 | TASK [stage-output : Make all log files readable]
2025-06-01 23:30:14.794025 | orchestrator | ok
2025-06-01 23:30:14.804891 |
2025-06-01 23:30:14.805083 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-06-01 23:30:14.840869 | orchestrator | skipping: Conditional result was False
2025-06-01 23:30:14.858250 |
2025-06-01 23:30:14.858421 | TASK [stage-output : Discover log files for compression]
2025-06-01 23:30:14.883253 | orchestrator | skipping: Conditional result was False
2025-06-01 23:30:14.897018 |
2025-06-01 23:30:14.897236 | LOOP [stage-output : Archive everything from logs]
2025-06-01 23:30:14.939249 |
2025-06-01 23:30:14.939419 | PLAY [Post cleanup play]
2025-06-01 23:30:14.947358 |
2025-06-01 23:30:14.947471 | TASK [Set cloud fact (Zuul deployment)]
2025-06-01 23:30:15.008297 | orchestrator | ok
2025-06-01 23:30:15.021802 |
2025-06-01 23:30:15.021937 | TASK [Set cloud fact (local deployment)]
2025-06-01 23:30:15.056597 | orchestrator | skipping: Conditional result was False
2025-06-01 23:30:15.070313 |
2025-06-01 23:30:15.070460 | TASK [Clean the cloud environment]
2025-06-01 23:30:15.732437 | orchestrator | 2025-06-01 23:30:15 - clean up servers
2025-06-01 23:30:16.652180 | orchestrator | 2025-06-01 23:30:16 - testbed-manager
2025-06-01 23:30:16.742822 | orchestrator | 2025-06-01 23:30:16 - testbed-node-5
2025-06-01 23:30:16.840527 | orchestrator | 2025-06-01 23:30:16 - testbed-node-2
2025-06-01 23:30:16.943927 | orchestrator | 2025-06-01 23:30:16 - testbed-node-0
2025-06-01 23:30:17.038639 | orchestrator | 2025-06-01 23:30:17 - testbed-node-3
2025-06-01 23:30:17.147304 | orchestrator | 2025-06-01 23:30:17 - testbed-node-4
2025-06-01 23:30:17.248212 | orchestrator | 2025-06-01 23:30:17 - testbed-node-1
2025-06-01 23:30:17.341371 | orchestrator | 2025-06-01 23:30:17 - clean up keypairs
2025-06-01 23:30:17.361624 | orchestrator | 2025-06-01 23:30:17 - testbed
2025-06-01 23:30:17.403023 | orchestrator | 2025-06-01 23:30:17 - wait for servers to be gone
2025-06-01 23:30:28.285790 | orchestrator | 2025-06-01 23:30:28 - clean up ports
2025-06-01 23:30:28.484932 | orchestrator | 2025-06-01 23:30:28 - 18071a85-dea1-40aa-bf27-196bb74f3cf9
2025-06-01 23:30:28.766985 | orchestrator | 2025-06-01 23:30:28 - 1c1ac932-7e49-4613-8c1a-1da9160dc074
2025-06-01 23:30:29.022872 | orchestrator | 2025-06-01 23:30:29 - 22496197-7f8a-45f6-96f3-fa2eb73194e6
2025-06-01 23:30:29.249137 | orchestrator | 2025-06-01 23:30:29 - 7bf2f14b-4afe-40fa-b7fd-83eb895ae8ff
2025-06-01 23:30:29.474541 | orchestrator | 2025-06-01 23:30:29 - cd60691d-f968-4085-847c-1ec6c9adc227
2025-06-01 23:30:29.904852 | orchestrator | 2025-06-01 23:30:29 - d1177117-0366-4205-ade2-2abce22fb034
2025-06-01 23:30:30.111570 | orchestrator | 2025-06-01 23:30:30 - dd490ff8-8eb8-4b79-b69e-9aabd54c8042
2025-06-01 23:30:30.376185 | orchestrator | 2025-06-01 23:30:30 - clean up volumes
2025-06-01 23:30:30.510329 | orchestrator | 2025-06-01 23:30:30 - testbed-volume-2-node-base
2025-06-01 23:30:30.551067 | orchestrator | 2025-06-01 23:30:30 - testbed-volume-5-node-base
2025-06-01 23:30:30.592684 | orchestrator | 2025-06-01 23:30:30 - testbed-volume-1-node-base
2025-06-01 23:30:30.636844 | orchestrator | 2025-06-01 23:30:30 - testbed-volume-0-node-base
2025-06-01 23:30:30.677322 | orchestrator | 2025-06-01 23:30:30 - testbed-volume-3-node-base
2025-06-01 23:30:30.718926 | orchestrator | 2025-06-01 23:30:30 - testbed-volume-4-node-base
2025-06-01 23:30:30.763115 | orchestrator | 2025-06-01 23:30:30 - testbed-volume-manager-base
2025-06-01 23:30:30.802755 | orchestrator | 2025-06-01 23:30:30 - testbed-volume-7-node-4
2025-06-01 23:30:30.845525 | orchestrator | 2025-06-01 23:30:30 - testbed-volume-3-node-3
2025-06-01 23:30:30.888733 | orchestrator | 2025-06-01 23:30:30 - testbed-volume-6-node-3
2025-06-01 23:30:30.928022 | orchestrator | 2025-06-01 23:30:30 - testbed-volume-2-node-5
2025-06-01 23:30:30.972299 | orchestrator | 2025-06-01 23:30:30 - testbed-volume-1-node-4
2025-06-01 23:30:31.012810 | orchestrator | 2025-06-01 23:30:31 - testbed-volume-8-node-5
2025-06-01 23:30:31.056222 | orchestrator | 2025-06-01 23:30:31 - testbed-volume-0-node-3
2025-06-01 23:30:31.095635 | orchestrator | 2025-06-01 23:30:31 - testbed-volume-4-node-4
2025-06-01 23:30:31.138418 | orchestrator | 2025-06-01 23:30:31 - testbed-volume-5-node-5
2025-06-01 23:30:31.180129 | orchestrator | 2025-06-01 23:30:31 - disconnect routers
2025-06-01 23:30:31.298749 | orchestrator | 2025-06-01 23:30:31 - testbed
2025-06-01 23:30:32.461736 | orchestrator | 2025-06-01 23:30:32 - clean up subnets
2025-06-01 23:30:32.510583 | orchestrator | 2025-06-01 23:30:32 - subnet-testbed-management
2025-06-01 23:30:32.732164 | orchestrator | 2025-06-01 23:30:32 - clean up networks
2025-06-01 23:30:32.945775 | orchestrator | 2025-06-01 23:30:32 - net-testbed-management
2025-06-01 23:30:33.674333 | orchestrator | 2025-06-01 23:30:33 - clean up security groups
2025-06-01 23:30:33.724967 | orchestrator | 2025-06-01 23:30:33 - testbed-management
2025-06-01 23:30:33.872082 | orchestrator | 2025-06-01 23:30:33 - testbed-node
2025-06-01 23:30:33.988928 | orchestrator | 2025-06-01 23:30:33 - clean up floating ips
2025-06-01 23:30:34.021905 | orchestrator | 2025-06-01 23:30:34 - 81.163.192.90
2025-06-01 23:30:34.404336 | orchestrator | 2025-06-01 23:30:34 - clean up routers
2025-06-01 23:30:34.528875 | orchestrator | 2025-06-01 23:30:34 - testbed
2025-06-01 23:30:36.130966 | orchestrator | ok: Runtime: 0:00:20.662319
2025-06-01 23:30:36.135361 |
2025-06-01 23:30:36.135533 | PLAY RECAP
2025-06-01 23:30:36.135658 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-06-01 23:30:36.135722 |
2025-06-01 23:30:36.281960 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-01 23:30:36.284015 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-01 23:30:37.038735 |
2025-06-01 23:30:37.039390 | PLAY [Cleanup play]
2025-06-01 23:30:37.062004 |
2025-06-01 23:30:37.062321 | TASK [Set cloud fact (Zuul deployment)]
2025-06-01 23:30:37.118701 | orchestrator | ok
2025-06-01 23:30:37.130795 |
2025-06-01 23:30:37.131034 | TASK [Set cloud fact (local deployment)]
2025-06-01 23:30:37.166739 | orchestrator | skipping: Conditional result was False
2025-06-01 23:30:37.184022 |
2025-06-01 23:30:37.184215 | TASK [Clean the cloud environment]
2025-06-01 23:30:38.283281 | orchestrator | 2025-06-01 23:30:38 - clean up servers
2025-06-01 23:30:38.879316 | orchestrator | 2025-06-01 23:30:38 - clean up keypairs
2025-06-01 23:30:38.895826 | orchestrator | 2025-06-01 23:30:38 - wait for servers to be gone
2025-06-01 23:30:38.936763 | orchestrator | 2025-06-01 23:30:38 - clean up ports
2025-06-01 23:30:39.009571 | orchestrator | 2025-06-01 23:30:39 - clean up volumes
2025-06-01 23:30:39.069985 | orchestrator | 2025-06-01 23:30:39 - disconnect routers
2025-06-01 23:30:39.094310 | orchestrator | 2025-06-01 23:30:39 - clean up subnets
2025-06-01 23:30:39.111710 | orchestrator | 2025-06-01 23:30:39 - clean up networks
2025-06-01 23:30:39.268256 | orchestrator | 2025-06-01 23:30:39 - clean up security groups
2025-06-01 23:30:39.322605 | orchestrator | 2025-06-01 23:30:39 - clean up floating ips
2025-06-01 23:30:39.364375 | orchestrator | 2025-06-01 23:30:39 - clean up routers
2025-06-01 23:30:39.725811 | orchestrator | ok: Runtime: 0:00:01.455431
2025-06-01 23:30:39.729354 |
2025-06-01 23:30:39.729510 | PLAY RECAP
2025-06-01 23:30:39.729630 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-06-01 23:30:39.729692 |
2025-06-01 23:30:39.858103 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-01 23:30:39.860578 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-01 23:30:40.613451 |
2025-06-01 23:30:40.613627 | PLAY [Base post-fetch]
2025-06-01 23:30:40.629369 |
2025-06-01 23:30:40.629517 | TASK [fetch-output : Set log path for multiple nodes]
2025-06-01 23:30:40.684957 | orchestrator | skipping: Conditional result was False
2025-06-01 23:30:40.698780 |
2025-06-01 23:30:40.699033 | TASK [fetch-output : Set log path for single node]
2025-06-01 23:30:40.747560 | orchestrator | ok
2025-06-01 23:30:40.756650 |
2025-06-01 23:30:40.756834 | LOOP [fetch-output : Ensure local output dirs]
2025-06-01 23:30:41.290783 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/2b9a5e4242674de7b2ac603f8dfc33e2/work/logs"
2025-06-01 23:30:41.562537 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/2b9a5e4242674de7b2ac603f8dfc33e2/work/artifacts"
2025-06-01 23:30:41.830733 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/2b9a5e4242674de7b2ac603f8dfc33e2/work/docs"
2025-06-01 23:30:41.845205 |
2025-06-01 23:30:41.845352 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-06-01 23:30:42.897339 | orchestrator | changed: .d..t...... ./
2025-06-01 23:30:42.897746 | orchestrator | changed: All items complete
2025-06-01 23:30:42.897817 |
2025-06-01 23:30:43.676441 | orchestrator | changed: .d..t...... ./
2025-06-01 23:30:44.476758 | orchestrator | changed: .d..t...... ./
2025-06-01 23:30:44.500628 |
2025-06-01 23:30:44.500788 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-06-01 23:30:44.538592 | orchestrator | skipping: Conditional result was False
2025-06-01 23:30:44.542235 | orchestrator | skipping: Conditional result was False
2025-06-01 23:30:44.568440 |
2025-06-01 23:30:44.568589 | PLAY RECAP
2025-06-01 23:30:44.568681 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-06-01 23:30:44.568728 |
2025-06-01 23:30:44.747864 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-01 23:30:44.750705 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-01 23:30:45.495003 |
2025-06-01 23:30:45.495185 | PLAY [Base post]
2025-06-01 23:30:45.509989 |
2025-06-01 23:30:45.510179 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-06-01 23:30:46.528861 | orchestrator | changed
2025-06-01 23:30:46.539557 |
2025-06-01 23:30:46.539692 | PLAY RECAP
2025-06-01 23:30:46.539776 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-06-01 23:30:46.539887 |
2025-06-01 23:30:46.661179 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-01 23:30:46.663810 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-06-01 23:30:47.477920 |
2025-06-01 23:30:47.478108 | PLAY [Base post-logs]
2025-06-01 23:30:47.489566 |
2025-06-01 23:30:47.489742 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-06-01 23:30:47.947405 | localhost | changed
2025-06-01 23:30:47.965073 |
2025-06-01
23:30:47.965313 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-06-01 23:30:47.992813 | localhost | ok 2025-06-01 23:30:47.998667 | 2025-06-01 23:30:47.998819 | TASK [Set zuul-log-path fact] 2025-06-01 23:30:48.015986 | localhost | ok 2025-06-01 23:30:48.034384 | 2025-06-01 23:30:48.034555 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-06-01 23:30:48.073237 | localhost | ok 2025-06-01 23:30:48.080320 | 2025-06-01 23:30:48.080509 | TASK [upload-logs : Create log directories] 2025-06-01 23:30:48.582269 | localhost | changed 2025-06-01 23:30:48.587006 | 2025-06-01 23:30:48.587194 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-06-01 23:30:49.121683 | localhost -> localhost | ok: Runtime: 0:00:00.007308 2025-06-01 23:30:49.131684 | 2025-06-01 23:30:49.131896 | TASK [upload-logs : Upload logs to log server] 2025-06-01 23:30:49.708315 | localhost | Output suppressed because no_log was given 2025-06-01 23:30:49.710757 | 2025-06-01 23:30:49.710953 | LOOP [upload-logs : Compress console log and json output] 2025-06-01 23:30:49.771289 | localhost | skipping: Conditional result was False 2025-06-01 23:30:49.776570 | localhost | skipping: Conditional result was False 2025-06-01 23:30:49.783821 | 2025-06-01 23:30:49.784045 | LOOP [upload-logs : Upload compressed console log and json output] 2025-06-01 23:30:49.834967 | localhost | skipping: Conditional result was False 2025-06-01 23:30:49.835606 | 2025-06-01 23:30:49.839236 | localhost | skipping: Conditional result was False 2025-06-01 23:30:49.846920 | 2025-06-01 23:30:49.847149 | LOOP [upload-logs : Upload console log and json output]
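Note: the "Clean the cloud environment" task above removes OpenStack resources in dependency order — servers first (with a wait until they are gone, since ports cannot be freed while instances hold them), routers last. A minimal sketch of that sequencing, using the exact step names from the log; the `execute` callback is a stub standing in for the real OpenStack API calls, not the job's actual implementation:

```python
# Dependency-ordered cloud cleanup, mirroring the step names in the log above.
CLEANUP_STEPS = [
    "clean up servers",
    "clean up keypairs",
    "wait for servers to be gone",   # ports are freed only once servers are gone
    "clean up ports",
    "clean up volumes",
    "disconnect routers",            # detach before subnets/networks can go
    "clean up subnets",
    "clean up networks",
    "clean up security groups",
    "clean up floating ips",
    "clean up routers",
]

def run_cleanup(execute):
    """Run each cleanup step in order; `execute` is a stub for the real API calls."""
    for step in CLEANUP_STEPS:
        execute(step)
    return CLEANUP_STEPS
```

The ordering is the interesting part: deleting a network before its ports and subnets (or a router while still attached) would fail with a conflict, which is why servers and ports lead and routers trail.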
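Note: the `changed: .d..t...... ./` lines in the fetch-output loop are rsync `--itemize-changes` codes (here: a directory whose modification time was updated). A minimal decoder sketch for the common cases; it covers only the file-type character and the basic attribute flags, not creation markers (`+`) or `*deleting` message lines:

```python
# Decode rsync --itemize-changes strings such as ".d..t......".
FILE_TYPES = {"f": "file", "d": "directory", "L": "symlink",
              "D": "device", "S": "special"}
# Positions 2..7 of the 11-character code and the attribute each one reports.
ATTR_FLAGS = {2: "checksum", 3: "size", 4: "mtime",
              5: "permissions", 6: "owner", 7: "group"}

def decode_itemized(code: str) -> dict:
    """Return the file type and which attributes rsync reports as changed."""
    changed = [name for pos, name in ATTR_FLAGS.items() if code[pos] not in ". "]
    return {"type": FILE_TYPES.get(code[1], "unknown"), "changed": changed}

print(decode_itemized(".d..t......"))
# -> {'type': 'directory', 'changed': ['mtime']}
```

So the three `changed: .d..t...... ./` entries above mean rsync touched nothing in the trees except the top-level directory's timestamp; the `changed` status is what makes Ansible report the task as changed.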